2023-07-12 05:16:56,834 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9 2023-07-12 05:16:56,851 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 05:16:56,869 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 05:16:56,870 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90, deleteOnExit=true 2023-07-12 05:16:56,870 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 05:16:56,871 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/test.cache.data in system properties and HBase conf 2023-07-12 05:16:56,871 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 05:16:56,872 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir in system properties and HBase conf 2023-07-12 05:16:56,872 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 05:16:56,873 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 05:16:56,873 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 05:16:57,005 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 05:16:57,414 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 05:16:57,420 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:16:57,421 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:16:57,421 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 05:16:57,421 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:16:57,422 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 05:16:57,422 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 05:16:57,423 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:16:57,423 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:16:57,424 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 05:16:57,424 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/nfs.dump.dir in system properties and HBase conf 2023-07-12 05:16:57,424 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir in system properties and HBase conf 2023-07-12 05:16:57,425 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:16:57,425 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 05:16:57,425 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 05:16:58,021 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:16:58,027 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:16:58,362 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 05:16:58,561 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 05:16:58,585 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:16:58,631 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:16:58,681 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/Jetty_localhost_localdomain_46143_hdfs____.2ecbqw/webapp 2023-07-12 05:16:58,873 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:46143 2023-07-12 05:16:58,885 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:16:58,885 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:16:59,429 WARN [Listener at localhost.localdomain/35039] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:16:59,504 WARN [Listener at localhost.localdomain/35039] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:16:59,521 WARN [Listener at localhost.localdomain/35039] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:16:59,527 INFO [Listener at localhost.localdomain/35039] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:16:59,531 INFO [Listener at 
localhost.localdomain/35039] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/Jetty_localhost_40999_datanode____.ngb8fw/webapp 2023-07-12 05:16:59,634 INFO [Listener at localhost.localdomain/35039] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40999 2023-07-12 05:17:00,100 WARN [Listener at localhost.localdomain/36953] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:00,123 WARN [Listener at localhost.localdomain/36953] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:00,130 WARN [Listener at localhost.localdomain/36953] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:00,133 INFO [Listener at localhost.localdomain/36953] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:00,143 INFO [Listener at localhost.localdomain/36953] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/Jetty_localhost_46251_datanode____.yba1sd/webapp 2023-07-12 05:17:00,240 INFO [Listener at localhost.localdomain/36953] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46251 2023-07-12 05:17:00,254 WARN [Listener at localhost.localdomain/40581] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:00,303 WARN [Listener at localhost.localdomain/40581] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:00,306 WARN [Listener at localhost.localdomain/40581] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:00,308 INFO [Listener at localhost.localdomain/40581] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:00,317 INFO [Listener at localhost.localdomain/40581] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/Jetty_localhost_33119_datanode____9cxpuo/webapp 2023-07-12 05:17:00,434 INFO [Listener at localhost.localdomain/40581] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33119 2023-07-12 05:17:00,455 WARN [Listener at localhost.localdomain/33317] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:00,689 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x29ce060e9ff837e2: Processing first storage report for DS-3041c203-4d72-4c75-a425-decdb827eb6e from datanode aa67dd5b-48c8-44ab-a821-ff1add2bb0a9 2023-07-12 05:17:00,691 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x29ce060e9ff837e2: from storage DS-3041c203-4d72-4c75-a425-decdb827eb6e node DatanodeRegistration(127.0.0.1:36333, datanodeUuid=aa67dd5b-48c8-44ab-a821-ff1add2bb0a9, infoPort=45189, infoSecurePort=0, ipcPort=40581, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,691 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a290f4b12991e61: Processing first storage report for DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181 from datanode 2974569b-c7ac-4e48-bbf0-845a322afa24 2023-07-12 05:17:00,691 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a290f4b12991e61: from storage DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181 node DatanodeRegistration(127.0.0.1:43601, datanodeUuid=2974569b-c7ac-4e48-bbf0-845a322afa24, infoPort=43155, infoSecurePort=0, ipcPort=36953, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,691 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcbd2470c5a1c3ba: Processing first storage report for DS-132c0fde-2522-4404-840d-733da76c03a3 from datanode bc697de6-8040-4b63-aa70-c7775bd0a646 2023-07-12 05:17:00,692 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcbd2470c5a1c3ba: from storage DS-132c0fde-2522-4404-840d-733da76c03a3 node DatanodeRegistration(127.0.0.1:43159, datanodeUuid=bc697de6-8040-4b63-aa70-c7775bd0a646, infoPort=43163, infoSecurePort=0, ipcPort=33317, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,692 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x29ce060e9ff837e2: Processing first storage report for DS-d6ef8cb6-b60c-45cd-9a42-3c2d577d31c3 from datanode aa67dd5b-48c8-44ab-a821-ff1add2bb0a9 2023-07-12 05:17:00,692 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x29ce060e9ff837e2: from storage DS-d6ef8cb6-b60c-45cd-9a42-3c2d577d31c3 node DatanodeRegistration(127.0.0.1:36333, datanodeUuid=aa67dd5b-48c8-44ab-a821-ff1add2bb0a9, infoPort=45189, infoSecurePort=0, ipcPort=40581, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,692 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5a290f4b12991e61: Processing first storage report for DS-907cbdc8-d563-4b5f-ae55-69c30d5ed441 from datanode 2974569b-c7ac-4e48-bbf0-845a322afa24 2023-07-12 05:17:00,692 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5a290f4b12991e61: from storage DS-907cbdc8-d563-4b5f-ae55-69c30d5ed441 node DatanodeRegistration(127.0.0.1:43601, datanodeUuid=2974569b-c7ac-4e48-bbf0-845a322afa24, infoPort=43155, infoSecurePort=0, ipcPort=36953, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,693 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcbd2470c5a1c3ba: Processing first storage report for 
DS-a65eafd5-b22a-415a-9365-1e5d59207138 from datanode bc697de6-8040-4b63-aa70-c7775bd0a646 2023-07-12 05:17:00,693 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcbd2470c5a1c3ba: from storage DS-a65eafd5-b22a-415a-9365-1e5d59207138 node DatanodeRegistration(127.0.0.1:43159, datanodeUuid=bc697de6-8040-4b63-aa70-c7775bd0a646, infoPort=43163, infoSecurePort=0, ipcPort=33317, storageInfo=lv=-57;cid=testClusterID;nsid=2029322732;c=1689139018115), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:00,873 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9 2023-07-12 05:17:00,943 INFO [Listener at localhost.localdomain/33317] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/zookeeper_0, clientPort=62508, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 05:17:00,957 INFO [Listener at localhost.localdomain/33317] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62508 2023-07-12 05:17:00,964 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:00,967 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:01,668 INFO [Listener at localhost.localdomain/33317] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e with version=8 2023-07-12 05:17:01,668 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/hbase-staging 2023-07-12 05:17:01,678 DEBUG [Listener at localhost.localdomain/33317] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 05:17:01,678 DEBUG [Listener at localhost.localdomain/33317] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 05:17:01,678 DEBUG [Listener at localhost.localdomain/33317] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 05:17:01,678 DEBUG [Listener at localhost.localdomain/33317] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
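Editor's note: a minimal sketch, assuming the standard HBase 2.x test APIs named in the log (HBaseClassTestRule, HBaseTestingUtility, StartMiniClusterOption), of the kind of JUnit class setup that produces the startup lines above: 1 master, 3 region servers, 3 data nodes and 1 ZooKeeper server. The class name is illustrative; this is not the TestRSGroupsAdmin1 source.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.apache.hadoop.hbase.testclassification.LargeTests;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.experimental.categories.Category;

@Category(LargeTests.class)
public class MiniClusterSetupSketch {

  // HBaseClassTestRule derives the per-class timeout from the test category
  // (logged above as "timeout: 13 mins").
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
    HBaseClassTestRule.forClass(MiniClusterSetupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Matches the logged option: numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
      .numMasters(1)
      .numRegionServers(3)
      .numDataNodes(3)
      .numZkServers(1)
      .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}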
2023-07-12 05:17:02,102 INFO [Listener at localhost.localdomain/33317] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 05:17:02,618 INFO [Listener at localhost.localdomain/33317] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:02,661 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:02,662 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:02,662 INFO [Listener at localhost.localdomain/33317] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:02,662 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:02,662 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:02,847 INFO [Listener at localhost.localdomain/33317] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:02,938 DEBUG [Listener at localhost.localdomain/33317] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 05:17:03,045 INFO [Listener at localhost.localdomain/33317] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41085 2023-07-12 05:17:03,056 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:03,058 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:03,081 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41085 connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:03,130 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:410850x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:03,142 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41085-0x1007f9c80ff0000 connected 2023-07-12 05:17:03,191 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:03,192 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-12 05:17:03,197 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:03,208 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41085 2023-07-12 05:17:03,208 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41085 2023-07-12 05:17:03,209 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41085 2023-07-12 05:17:03,209 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41085 2023-07-12 05:17:03,209 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41085 2023-07-12 05:17:03,247 INFO [Listener at localhost.localdomain/33317] log.Log(170): Logging initialized @7177ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 05:17:03,392 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:03,393 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:03,394 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:03,397 INFO [Listener at localhost.localdomain/33317] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 05:17:03,397 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:03,397 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:03,402 INFO [Listener at localhost.localdomain/33317] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
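Editor's note: a hedged sketch of configuration knobs that plausibly drive two values logged above, the handlerCount=3 on each RPC executor and the ZooKeeper ensemble 127.0.0.1:62508. The mapping of handlerCount to hbase.regionserver.handler.count is an assumption about how the mini cluster was tuned, not something read from the test source, and in practice the client port is assigned by MiniZooKeeperCluster rather than hard-coded.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MiniClusterConfSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Keep the RPC handler pools small for a single-host test cluster
    // (the log shows each executor instantiated with handlerCount=3).
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Point clients at the mini ZooKeeper cluster; 62508 was the port chosen in this run.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 62508);
    return conf;
  }
}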
2023-07-12 05:17:03,454 INFO [Listener at localhost.localdomain/33317] http.HttpServer(1146): Jetty bound to port 46839 2023-07-12 05:17:03,456 INFO [Listener at localhost.localdomain/33317] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:03,487 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:03,490 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70bfe8f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:03,491 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:03,491 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35456473{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:03,675 INFO [Listener at localhost.localdomain/33317] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:03,692 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:03,692 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:03,695 INFO [Listener at localhost.localdomain/33317] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:03,706 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:03,734 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@29a0d1a2{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/jetty-0_0_0_0-46839-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6608111985205449694/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:03,746 INFO [Listener at localhost.localdomain/33317] server.AbstractConnector(333): Started ServerConnector@7991162b{HTTP/1.1, (http/1.1)}{0.0.0.0:46839} 2023-07-12 05:17:03,746 INFO [Listener at localhost.localdomain/33317] server.Server(415): Started @7676ms 2023-07-12 05:17:03,751 INFO [Listener at localhost.localdomain/33317] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e, hbase.cluster.distributed=false 2023-07-12 05:17:03,836 INFO [Listener at localhost.localdomain/33317] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:03,836 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:03,836 INFO [Listener 
at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:03,837 INFO [Listener at localhost.localdomain/33317] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:03,837 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:03,837 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:03,844 INFO [Listener at localhost.localdomain/33317] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:03,854 INFO [Listener at localhost.localdomain/33317] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46611 2023-07-12 05:17:03,858 INFO [Listener at localhost.localdomain/33317] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:03,868 DEBUG [Listener at localhost.localdomain/33317] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:03,869 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:03,872 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:03,875 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46611 connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:03,885 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:466110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:03,887 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:466110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:03,891 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46611-0x1007f9c80ff0001 connected 2023-07-12 05:17:03,891 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:03,892 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:03,899 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46611 2023-07-12 05:17:03,900 DEBUG [Listener at localhost.localdomain/33317] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46611 2023-07-12 05:17:03,901 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46611 2023-07-12 05:17:03,905 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46611 2023-07-12 05:17:03,906 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46611 2023-07-12 05:17:03,909 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:03,909 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:03,910 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:03,911 INFO [Listener at localhost.localdomain/33317] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:03,911 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:03,912 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:03,912 INFO [Listener at localhost.localdomain/33317] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
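Editor's note: a back-of-the-envelope sketch of where "Allocating BlockCache size=782.40 MB, blockSize=64 KB" comes from. The on-heap block cache is sized as a fraction of the JVM max heap (hfile.block.cache.size, default 0.4), and 64 KB is the default block size; the exact accounting inside BlockCacheFactory may differ slightly, so treat this as an approximation rather than the factory's implementation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BlockCacheSizeSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the heap given to the on-heap block cache.
    float fraction = conf.getFloat("hfile.block.cache.size", 0.4f);
    long maxHeap = Runtime.getRuntime().maxMemory(); // the surefire fork's -Xmx
    double cacheMb = maxHeap * (double) fraction / (1024.0 * 1024.0);
    // In this run the factory reported 782.40 MB; on another JVM the figure scales with -Xmx.
    System.out.printf("BlockCache size=%.2f MB, blockSize=64 KB%n", cacheMb);
  }
}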
2023-07-12 05:17:03,914 INFO [Listener at localhost.localdomain/33317] http.HttpServer(1146): Jetty bound to port 33241 2023-07-12 05:17:03,914 INFO [Listener at localhost.localdomain/33317] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:03,921 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:03,921 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3eb786a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:03,922 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:03,923 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@588dc7af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:04,057 INFO [Listener at localhost.localdomain/33317] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:04,059 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:04,059 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:04,059 INFO [Listener at localhost.localdomain/33317] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:04,060 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,064 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@354b4393{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/jetty-0_0_0_0-33241-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1927630148704505735/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:04,065 INFO [Listener at localhost.localdomain/33317] server.AbstractConnector(333): Started ServerConnector@62761c88{HTTP/1.1, (http/1.1)}{0.0.0.0:33241} 2023-07-12 05:17:04,065 INFO [Listener at localhost.localdomain/33317] server.Server(415): Started @7995ms 2023-07-12 05:17:04,083 INFO [Listener at localhost.localdomain/33317] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:04,084 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:04,084 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:04,085 INFO [Listener at localhost.localdomain/33317] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:04,085 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:04,085 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:04,086 INFO [Listener at localhost.localdomain/33317] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:04,088 INFO [Listener at localhost.localdomain/33317] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44619 2023-07-12 05:17:04,088 INFO [Listener at localhost.localdomain/33317] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:04,091 DEBUG [Listener at localhost.localdomain/33317] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:04,092 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:04,094 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:04,096 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44619 connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:04,100 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:446190x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:04,103 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44619-0x1007f9c80ff0002 connected 2023-07-12 05:17:04,104 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:04,105 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:04,106 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:04,111 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44619 2023-07-12 05:17:04,112 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44619 2023-07-12 05:17:04,112 DEBUG [Listener at localhost.localdomain/33317] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44619 2023-07-12 05:17:04,112 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44619 2023-07-12 05:17:04,113 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44619 2023-07-12 05:17:04,115 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:04,115 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:04,115 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:04,116 INFO [Listener at localhost.localdomain/33317] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:04,116 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:04,116 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:04,116 INFO [Listener at localhost.localdomain/33317] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:04,117 INFO [Listener at localhost.localdomain/33317] http.HttpServer(1146): Jetty bound to port 33885 2023-07-12 05:17:04,117 INFO [Listener at localhost.localdomain/33317] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:04,123 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,124 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@792df3d0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:04,125 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,125 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@62322ad2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:04,262 INFO [Listener at localhost.localdomain/33317] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:04,264 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:04,264 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:04,265 INFO [Listener at localhost.localdomain/33317] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:04,266 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,267 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2be25988{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/jetty-0_0_0_0-33885-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2003616807448099516/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:04,268 INFO [Listener at localhost.localdomain/33317] server.AbstractConnector(333): Started ServerConnector@3702ea50{HTTP/1.1, (http/1.1)}{0.0.0.0:33885} 2023-07-12 05:17:04,268 INFO [Listener at localhost.localdomain/33317] server.Server(415): Started @8198ms 2023-07-12 05:17:04,279 INFO [Listener at localhost.localdomain/33317] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:04,279 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:04,279 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:04,280 INFO [Listener at localhost.localdomain/33317] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:04,280 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:04,280 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:04,280 INFO [Listener at localhost.localdomain/33317] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:04,282 INFO [Listener at localhost.localdomain/33317] ipc.NettyRpcServer(120): Bind to /148.251.75.209:35711 2023-07-12 05:17:04,282 INFO [Listener at localhost.localdomain/33317] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:04,283 DEBUG [Listener at localhost.localdomain/33317] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:04,285 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:04,287 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:04,289 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35711 connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:04,293 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:357110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:04,295 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:357110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:04,296 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:357110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:04,297 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:357110x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:04,300 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35711 2023-07-12 05:17:04,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35711-0x1007f9c80ff0003 connected 2023-07-12 05:17:04,305 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35711 2023-07-12 05:17:04,305 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35711 2023-07-12 05:17:04,307 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35711 2023-07-12 05:17:04,310 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35711 2023-07-12 05:17:04,313 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:04,314 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:04,314 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:04,315 INFO [Listener at localhost.localdomain/33317] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:04,315 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:04,315 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:04,315 INFO [Listener at localhost.localdomain/33317] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
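Editor's note: a hedged sketch, not taken from the test itself, of how a test usually enumerates the three region servers whose RPC ports are logged above (46611, 44619, 35711) once startMiniCluster() has returned, e.g. print(TEST_UTIL.getMiniHBaseCluster()).

import java.util.List;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

public class ListRegionServersSketch {
  public static void print(MiniHBaseCluster cluster) {
    List<RegionServerThread> threads = cluster.getRegionServerThreads();
    for (RegionServerThread t : threads) {
      // Each ServerName combines hostname, RPC port and start code,
      // e.g. jenkins-hbase20.apache.org,46611,<startcode>.
      System.out.println(t.getRegionServer().getServerName());
    }
  }
}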
2023-07-12 05:17:04,316 INFO [Listener at localhost.localdomain/33317] http.HttpServer(1146): Jetty bound to port 45773 2023-07-12 05:17:04,316 INFO [Listener at localhost.localdomain/33317] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:04,329 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,330 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@546de504{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:04,330 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,331 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6687e8e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:04,455 INFO [Listener at localhost.localdomain/33317] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:04,462 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:04,462 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:04,463 INFO [Listener at localhost.localdomain/33317] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:04,464 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:04,466 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40d19b62{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/jetty-0_0_0_0-45773-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3803593916297719875/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:04,467 INFO [Listener at localhost.localdomain/33317] server.AbstractConnector(333): Started ServerConnector@1fdbb67e{HTTP/1.1, (http/1.1)}{0.0.0.0:45773} 2023-07-12 05:17:04,467 INFO [Listener at localhost.localdomain/33317] server.Server(415): Started @8397ms 2023-07-12 05:17:04,482 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:04,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@19a15b5{HTTP/1.1, (http/1.1)}{0.0.0.0:43955} 2023-07-12 05:17:04,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @8425ms 2023-07-12 05:17:04,496 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:04,507 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:04,510 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:04,529 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:04,529 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:04,529 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:04,529 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:04,529 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:04,531 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:04,533 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,41085,1689139021900 from backup master directory 2023-07-12 05:17:04,533 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:04,541 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:04,542 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:04,543 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
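The NodeCreated/NodeDeleted/NodeChildrenChanged events above are one-shot ZooKeeper watches being fired and re-registered as the master and backup-master znodes change. A minimal stand-alone sketch of that watch pattern with the plain ZooKeeper client; the quorum string and znode paths are this run's values, but the class itself is only an illustration, not HBase code:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Session parameters mirror the quorum and timeout printed in the log above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62508", 90000, event -> { });
        Watcher watcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // Watches are one-shot: report the event, then re-register before reacting.
            System.out.println(event.getType() + " on " + event.getPath());
          }
        };
        // exists() sets a watch whether or not /hbase/master exists yet,
        // so both NodeCreated and NodeDeleted can be delivered.
        zk.exists("/hbase/master", watcher);
        // getChildren() watches for NodeChildrenChanged under /hbase/backup-masters.
        zk.getChildren("/hbase/backup-masters", watcher);
        Thread.sleep(Long.MAX_VALUE);
      }
    }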
2023-07-12 05:17:04,543 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:04,546 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 05:17:04,548 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 05:17:04,686 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/hbase.id with ID: f5ed9019-a7ce-4a38-a899-5fd3bcd29e63 2023-07-12 05:17:04,770 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:04,791 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:04,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x35c80c30 to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:04,894 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf85811, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:04,924 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:04,926 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 05:17:04,942 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 05:17:04,942 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 05:17:04,944 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 05:17:04,948 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 05:17:04,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:04,981 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store-tmp 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:05,019 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:05,019 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:05,019 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:05,021 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/WALs/jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:05,042 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41085%2C1689139021900, suffix=, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/WALs/jenkins-hbase20.apache.org,41085,1689139021900, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/oldWALs, maxLogs=10 2023-07-12 05:17:05,100 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:05,100 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:05,100 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:05,108 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 05:17:05,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/WALs/jenkins-hbase20.apache.org,41085,1689139021900/jenkins-hbase20.apache.org%2C41085%2C1689139021900.1689139025052 2023-07-12 05:17:05,183 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK], DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK]] 2023-07-12 05:17:05,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:05,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:05,189 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,191 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,256 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,263 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 05:17:05,298 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 05:17:05,312 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-12 05:17:05,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:05,342 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:05,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10093158400, jitterRate=-0.060001373291015625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:05,344 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:05,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 05:17:05,369 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 05:17:05,369 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 05:17:05,371 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 05:17:05,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 05:17:05,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 33 msec 2023-07-12 05:17:05,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 05:17:05,431 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 05:17:05,436 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
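The 'proc' column family printed with the master:store descriptor above maps directly onto the public descriptor-builder API. A minimal sketch reconstructing that family with the org.apache.hadoop.hbase.client builders; the attribute values come from the descriptor in the log, while the wrapper class is only illustrative:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilySketch {
      public static ColumnFamilyDescriptor procFamily() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)              // BLOOMFILTER => 'ROW'
            .setInMemory(false)                             // IN_MEMORY => 'false'
            .setMaxVersions(1)                              // VERSIONS => '1'
            .setKeepDeletedCells(KeepDeletedCells.FALSE)    // KEEP_DELETED_CELLS => 'FALSE'
            .setDataBlockEncoding(DataBlockEncoding.NONE)   // DATA_BLOCK_ENCODING => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE) // COMPRESSION => 'NONE'
            .setTimeToLive(HConstants.FOREVER)              // TTL => 'FOREVER'
            .setMinVersions(0)                              // MIN_VERSIONS => '0'
            .setBlockCacheEnabled(true)                     // BLOCKCACHE => 'true'
            .setBlocksize(65536)                            // BLOCKSIZE => '65536'
            .setScope(0)                                    // REPLICATION_SCOPE => '0'
            .build();
      }
    }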
2023-07-12 05:17:05,443 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 05:17:05,447 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 05:17:05,451 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 05:17:05,453 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:05,455 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 05:17:05,455 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 05:17:05,467 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 05:17:05,470 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:05,470 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:05,470 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:05,470 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:05,470 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:05,471 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,41085,1689139021900, sessionid=0x1007f9c80ff0000, setting cluster-up flag (Was=false) 2023-07-12 05:17:05,486 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:05,490 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 05:17:05,491 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:05,495 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:05,499 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 05:17:05,500 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:05,502 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.hbase-snapshot/.tmp 2023-07-12 05:17:05,570 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 05:17:05,574 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(951): ClusterId : f5ed9019-a7ce-4a38-a899-5fd3bcd29e63 2023-07-12 05:17:05,575 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(951): ClusterId : f5ed9019-a7ce-4a38-a899-5fd3bcd29e63 2023-07-12 05:17:05,574 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(951): ClusterId : f5ed9019-a7ce-4a38-a899-5fd3bcd29e63 2023-07-12 05:17:05,582 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:05,582 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:05,582 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:05,583 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 05:17:05,586 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:05,588 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 
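The RSGroupAdminService registration and the RSGroupAdminEndpoint coprocessor load above correspond to the documented way region server groups are enabled on branch-2: the endpoint runs as a master coprocessor and the group-aware balancer is selected. A minimal sketch of that configuration, using the documented class names rather than anything read from this run's site files:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupEnableSketch {
      public static Configuration enableRsGroups() {
        Configuration conf = HBaseConfiguration.create();
        // Load the RSGroup admin endpoint as a master coprocessor.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Use the group-aware balancer so regions stay on their group's servers.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
      }
    }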
2023-07-12 05:17:05,588 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:05,588 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:05,588 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:05,588 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 05:17:05,588 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:05,588 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:05,588 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:05,605 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:05,611 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ReadOnlyZKClient(139): Connect 0x4a5c4ca2 to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:05,612 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:05,613 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:05,620 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ReadOnlyZKClient(139): Connect 0x3b5e8d80 to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:05,620 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ReadOnlyZKClient(139): Connect 0x24f517be to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:05,636 DEBUG [RS:2;jenkins-hbase20:35711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a32fd50, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:05,637 DEBUG [RS:1;jenkins-hbase20:44619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@795b0eb6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:05,638 DEBUG [RS:2;jenkins-hbase20:35711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e0d7cce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:05,638 DEBUG [RS:1;jenkins-hbase20:44619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f087971, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:05,637 DEBUG [RS:0;jenkins-hbase20:46611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14ed1857, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:05,639 DEBUG [RS:0;jenkins-hbase20:46611] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6208c2ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:05,669 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:44619 2023-07-12 05:17:05,670 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:46611 2023-07-12 05:17:05,675 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:35711 2023-07-12 05:17:05,676 INFO [RS:1;jenkins-hbase20:44619] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:05,676 INFO [RS:1;jenkins-hbase20:44619] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:05,677 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:05,677 INFO [RS:0;jenkins-hbase20:46611] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:05,678 INFO [RS:0;jenkins-hbase20:46611] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:05,677 INFO [RS:2;jenkins-hbase20:35711] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:05,678 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:05,678 INFO [RS:2;jenkins-hbase20:35711] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:05,678 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 05:17:05,680 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:46611, startcode=1689139023835 2023-07-12 05:17:05,680 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:35711, startcode=1689139024278 2023-07-12 05:17:05,680 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:44619, startcode=1689139024083 2023-07-12 05:17:05,704 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:05,705 DEBUG [RS:1;jenkins-hbase20:44619] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:05,705 DEBUG [RS:0;jenkins-hbase20:46611] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:05,705 DEBUG [RS:2;jenkins-hbase20:35711] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:05,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:05,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 05:17:05,768 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:05,768 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
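The balancer parameters echoed twice above (maxSteps, runMaxSteps, stepsPerRegion, maxRunningTime) come straight from configuration. A minimal sketch of the corresponding keys with the same values the log reports; treat the key list as a best-effort reading of the stochastic balancer's documented settings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static Configuration balancerConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);    // maxSteps=1000000
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);   // runMaxSteps=false
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);      // stepsPerRegion=800
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L); // maxRunningTime=30000 ms
        return conf;
      }
    }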
2023-07-12 05:17:05,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:05,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:05,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:05,774 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54399, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:05,774 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34457, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:05,774 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48779, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:05,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689139055783 2023-07-12 05:17:05,790 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 05:17:05,793 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:05,794 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 05:17:05,795 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:05,796 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 05:17:05,798 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:05,805 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 05:17:05,806 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 05:17:05,806 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:05,806 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 05:17:05,807 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 05:17:05,809 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at 
org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:05,810 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:05,813 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 05:17:05,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 05:17:05,817 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 05:17:05,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 05:17:05,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 05:17:05,823 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139025823,5,FailOnTimeoutGroup] 2023-07-12 05:17:05,831 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139025824,5,FailOnTimeoutGroup] 2023-07-12 05:17:05,831 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:05,831 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 05:17:05,833 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:05,834 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:05,835 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 05:17:05,835 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 05:17:05,835 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 05:17:05,835 WARN [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
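The cleaner initialization above (TimeToLiveLogCleaner and ReplicationLogCleaner for old WALs; HFileLinkCleaner, SnapshotHFileCleaner and TimeToLiveHFileCleaner for archived HFiles) is driven by plugin lists in configuration. A minimal sketch of how those chains are typically declared; the class lists are the ones the log itself names, the keys are the standard cleaner settings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerChainSketch {
      public static Configuration cleanerConf() {
        Configuration conf = HBaseConfiguration.create();
        // Old-WAL cleaner delegates, comma separated, run by the LogsCleaner chore.
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
                + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
        // Archived-HFile cleaner delegates, run by the HFileCleaner chore.
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
                + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,"
                + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
        // Keep archived WALs for 10 minutes before the TTL cleaner removes them.
        conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
        return conf;
      }
    }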
2023-07-12 05:17:05,835 WARN [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 05:17:05,835 WARN [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 05:17:05,887 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:05,888 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:05,889 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e 2023-07-12 05:17:05,936 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:44619, startcode=1689139024083 2023-07-12 05:17:05,936 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:35711, startcode=1689139024278 2023-07-12 05:17:05,939 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:46611, startcode=1689139023835 2023-07-12 05:17:05,940 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:05,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:05,944 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,946 
INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:05,946 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 05:17:05,948 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/info 2023-07-12 05:17:05,948 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:05,949 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:05,950 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:05,952 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,952 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
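The hbase:meta descriptor created above shows the same builder pattern at the table level, including the MultiRowMutationEndpoint registered through the coprocessor$1 attribute. A minimal sketch of an ordinary user table carrying that coprocessor; the table name "demo" and its single "info" family are made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorTableSketch {
      public static TableDescriptor demoTable() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            // Same endpoint hbase:meta registers via its coprocessor attribute.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            .build();
      }
    }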
2023-07-12 05:17:05,953 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e 2023-07-12 05:17:05,953 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 05:17:05,953 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35039 2023-07-12 05:17:05,953 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46839 2023-07-12 05:17:05,954 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,954 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:05,954 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 05:17:05,955 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e 2023-07-12 05:17:05,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:05,955 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35039 2023-07-12 05:17:05,955 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46839 2023-07-12 05:17:05,956 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e 2023-07-12 05:17:05,956 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35039 2023-07-12 05:17:05,956 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46839 2023-07-12 05:17:05,956 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:05,957 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:05,958 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:05,961 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/table 2023-07-12 05:17:05,961 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:05,962 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:05,962 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,963 WARN [RS:0;jenkins-hbase20:46611] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 05:17:05,963 INFO [RS:0;jenkins-hbase20:46611] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:05,963 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,964 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,964 WARN [RS:1;jenkins-hbase20:44619] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 05:17:05,964 WARN [RS:2;jenkins-hbase20:35711] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 05:17:05,964 INFO [RS:1;jenkins-hbase20:44619] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:05,964 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,965 INFO [RS:2;jenkins-hbase20:35711] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:05,965 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,965 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:05,965 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46611,1689139023835] 2023-07-12 05:17:05,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44619,1689139024083] 2023-07-12 05:17:05,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,35711,1689139024278] 2023-07-12 05:17:05,968 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740 2023-07-12 05:17:05,968 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740 2023-07-12 05:17:05,981 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 05:17:05,984 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:05,985 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,985 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,985 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:05,986 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,986 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,986 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:05,986 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,986 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,986 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:05,991 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:05,993 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10985973280, jitterRate=0.023148491978645325}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:05,993 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:05,993 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:05,993 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:05,993 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:05,994 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 
2023-07-12 05:17:05,994 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:05,995 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:05,995 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:06,000 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:06,000 DEBUG [RS:2;jenkins-hbase20:35711] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:06,000 DEBUG [RS:1;jenkins-hbase20:44619] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:06,008 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:06,008 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 05:17:06,013 INFO [RS:0;jenkins-hbase20:46611] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:06,013 INFO [RS:1;jenkins-hbase20:44619] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:06,013 INFO [RS:2;jenkins-hbase20:35711] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:06,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 05:17:06,036 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 05:17:06,038 INFO [RS:2;jenkins-hbase20:35711] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:06,038 INFO [RS:0;jenkins-hbase20:46611] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:06,040 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 05:17:06,039 INFO [RS:1;jenkins-hbase20:44619] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:06,050 INFO [RS:0;jenkins-hbase20:46611] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:06,050 INFO [RS:2;jenkins-hbase20:35711] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:06,051 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:06,050 INFO [RS:1;jenkins-hbase20:44619] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:06,052 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,052 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,053 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:06,053 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:06,054 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:06,064 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,064 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,064 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,064 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,064 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,065 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, 
corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:06,066 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:06,065 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:0;jenkins-hbase20:46611] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,066 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:06,067 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:2;jenkins-hbase20:35711] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:0;jenkins-hbase20:46611] 
executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,067 DEBUG [RS:1;jenkins-hbase20:44619] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:06,069 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,069 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,069 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,070 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,070 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,070 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,069 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,070 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,070 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,088 INFO [RS:1;jenkins-hbase20:44619] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:06,088 INFO [RS:0;jenkins-hbase20:46611] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:06,088 INFO [RS:2;jenkins-hbase20:35711] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:06,091 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46611,1689139023835-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,091 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35711,1689139024278-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,091 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44619,1689139024083-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:06,115 INFO [RS:1;jenkins-hbase20:44619] regionserver.Replication(203): jenkins-hbase20.apache.org,44619,1689139024083 started 2023-07-12 05:17:06,115 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44619,1689139024083, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44619, sessionid=0x1007f9c80ff0002 2023-07-12 05:17:06,118 INFO [RS:0;jenkins-hbase20:46611] regionserver.Replication(203): jenkins-hbase20.apache.org,46611,1689139023835 started 2023-07-12 05:17:06,119 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46611,1689139023835, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46611, sessionid=0x1007f9c80ff0001 2023-07-12 05:17:06,122 INFO [RS:2;jenkins-hbase20:35711] regionserver.Replication(203): jenkins-hbase20.apache.org,35711,1689139024278 started 2023-07-12 05:17:06,122 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:06,122 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:06,122 DEBUG [RS:1;jenkins-hbase20:44619] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:06,122 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,35711,1689139024278, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:35711, sessionid=0x1007f9c80ff0003 2023-07-12 05:17:06,123 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44619,1689139024083' 2023-07-12 05:17:06,123 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:06,122 DEBUG [RS:0;jenkins-hbase20:46611] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:06,123 DEBUG [RS:2;jenkins-hbase20:35711] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:06,123 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:06,124 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,35711,1689139024278' 2023-07-12 05:17:06,123 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46611,1689139023835' 2023-07-12 05:17:06,124 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:06,124 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:06,124 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:06,124 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 
2023-07-12 05:17:06,124 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:06,125 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:06,125 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:06,125 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:06,125 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:06,125 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:06,125 DEBUG [RS:1;jenkins-hbase20:44619] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:06,125 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:06,126 DEBUG [RS:0;jenkins-hbase20:46611] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:06,126 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46611,1689139023835' 2023-07-12 05:17:06,126 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:06,126 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44619,1689139024083' 2023-07-12 05:17:06,126 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:06,125 DEBUG [RS:2;jenkins-hbase20:35711] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:06,126 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,35711,1689139024278' 2023-07-12 05:17:06,126 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:06,126 DEBUG [RS:0;jenkins-hbase20:46611] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:06,126 DEBUG [RS:1;jenkins-hbase20:44619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:06,127 DEBUG [RS:2;jenkins-hbase20:35711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:06,127 DEBUG [RS:0;jenkins-hbase20:46611] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:06,127 INFO [RS:0;jenkins-hbase20:46611] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:06,127 INFO [RS:0;jenkins-hbase20:46611] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 05:17:06,127 DEBUG [RS:1;jenkins-hbase20:44619] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:06,127 DEBUG [RS:2;jenkins-hbase20:35711] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:06,127 INFO [RS:1;jenkins-hbase20:44619] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:06,127 INFO [RS:2;jenkins-hbase20:35711] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:06,128 INFO [RS:1;jenkins-hbase20:44619] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 05:17:06,128 INFO [RS:2;jenkins-hbase20:35711] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 05:17:06,192 DEBUG [jenkins-hbase20:41085] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 05:17:06,205 DEBUG [jenkins-hbase20:41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:06,207 DEBUG [jenkins-hbase20:41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:06,207 DEBUG [jenkins-hbase20:41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:06,207 DEBUG [jenkins-hbase20:41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:06,207 DEBUG [jenkins-hbase20:41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:06,211 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46611,1689139023835, state=OPENING 2023-07-12 05:17:06,231 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 05:17:06,241 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:06,242 INFO [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46611%2C1689139023835, suffix=, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,46611,1689139023835, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs, maxLogs=32 2023-07-12 05:17:06,243 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:06,244 INFO [RS:2;jenkins-hbase20:35711] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C35711%2C1689139024278, suffix=, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,35711,1689139024278, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs, maxLogs=32 2023-07-12 05:17:06,254 INFO [RS:1;jenkins-hbase20:44619] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase20.apache.org%2C44619%2C1689139024083, suffix=, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,44619,1689139024083, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs, maxLogs=32 2023-07-12 05:17:06,290 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:06,321 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:06,321 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:06,321 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:06,323 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:06,323 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:06,323 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:06,334 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:06,334 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:06,335 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:06,347 INFO [RS:2;jenkins-hbase20:35711] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,35711,1689139024278/jenkins-hbase20.apache.org%2C35711%2C1689139024278.1689139026249 2023-07-12 05:17:06,347 INFO 
[RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,46611,1689139023835/jenkins-hbase20.apache.org%2C46611%2C1689139023835.1689139026255 2023-07-12 05:17:06,348 INFO [RS:1;jenkins-hbase20:44619] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,44619,1689139024083/jenkins-hbase20.apache.org%2C44619%2C1689139024083.1689139026256 2023-07-12 05:17:06,350 DEBUG [RS:2;jenkins-hbase20:35711] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK], DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK]] 2023-07-12 05:17:06,351 DEBUG [RS:1;jenkins-hbase20:44619] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK], DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK]] 2023-07-12 05:17:06,354 DEBUG [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK], DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK]] 2023-07-12 05:17:06,365 WARN [ReadOnlyZKClient-127.0.0.1:62508@0x35c80c30] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 05:17:06,397 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:06,400 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:06,401 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46611] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:59340 deadline: 1689139086401, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:06,507 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:06,512 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:06,519 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59344, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:06,537 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 05:17:06,538 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:06,544 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] 
wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46611%2C1689139023835.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,46611,1689139023835, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs, maxLogs=32 2023-07-12 05:17:06,563 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:06,564 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:06,564 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:06,577 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,46611,1689139023835/jenkins-hbase20.apache.org%2C46611%2C1689139023835.meta.1689139026546.meta 2023-07-12 05:17:06,578 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK]] 2023-07-12 05:17:06,578 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:06,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:06,585 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 05:17:06,587 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 05:17:06,596 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 05:17:06,596 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:06,596 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 05:17:06,596 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 05:17:06,599 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:06,604 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/info 2023-07-12 05:17:06,604 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/info 2023-07-12 05:17:06,605 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:06,606 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:06,606 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:06,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:06,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:06,610 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:06,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:06,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:06,613 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/table 2023-07-12 05:17:06,613 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/table 2023-07-12 05:17:06,614 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:06,615 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:06,617 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740 2023-07-12 05:17:06,620 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740 2023-07-12 05:17:06,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 05:17:06,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:06,636 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9513969440, jitterRate=-0.11394254863262177}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:06,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:06,659 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689139026503 2023-07-12 05:17:06,683 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 05:17:06,685 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 05:17:06,685 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46611,1689139023835, state=OPEN 2023-07-12 05:17:06,688 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 05:17:06,688 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:06,694 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 05:17:06,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46611,1689139023835 in 431 msec 2023-07-12 05:17:06,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 05:17:06,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 677 msec 2023-07-12 05:17:06,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1180 sec 2023-07-12 05:17:06,720 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689139026720, completionTime=-1 2023-07-12 05:17:06,720 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 05:17:06,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 05:17:06,781 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 05:17:06,781 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689139086781 2023-07-12 05:17:06,781 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689139146781 2023-07-12 05:17:06,781 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 61 msec 2023-07-12 05:17:06,818 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41085,1689139021900-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,818 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41085,1689139021900-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,819 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41085,1689139021900-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:41085, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:06,841 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 05:17:06,853 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 05:17:06,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:06,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 05:17:06,869 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:06,873 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:06,891 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:06,893 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8 empty. 2023-07-12 05:17:06,894 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:06,894 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 05:17:06,919 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:06,926 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 05:17:06,929 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:06,933 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:06,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:06,967 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 empty. 2023-07-12 05:17:06,969 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:06,969 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 05:17:06,970 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:06,972 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => aa9de83082eb73885ee3fc61a2c971d8, NAME => 'hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:07,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:07,000 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing aa9de83082eb73885ee3fc61a2c971d8, disabling compactions & flushes 2023-07-12 05:17:07,000 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:07,000 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:07,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:07,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. after waiting 0 ms 2023-07-12 05:17:07,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:07,001 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 
2023-07-12 05:17:07,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for aa9de83082eb73885ee3fc61a2c971d8: 2023-07-12 05:17:07,002 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 65a59a940eb599446f9a504f8dbf75d7, NAME => 'hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:07,009 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:07,027 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:07,028 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 65a59a940eb599446f9a504f8dbf75d7, disabling compactions & flushes 2023-07-12 05:17:07,028 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:07,028 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:07,028 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. after waiting 0 ms 2023-07-12 05:17:07,028 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:07,028 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
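Editor's note: the HMaster(2148) and HRegion(7675) entries above print the full schema of 'hbase:namespace' (a single 'info' family with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192). As a minimal sketch only, and assuming the standard HBase 2.x client API rather than anything specific to this test, an equivalent descriptor could be built like this (class and method names are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceSchemaSketch {
      // Builds a descriptor matching the attributes logged for 'hbase:namespace':
      // one 'info' family with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192.
      public static TableDescriptor namespaceLike() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
      }
    }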
2023-07-12 05:17:07,029 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:07,033 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:07,035 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139027034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139027034"}]},"ts":"1689139027034"} 2023-07-12 05:17:07,035 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139027014"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139027014"}]},"ts":"1689139027014"} 2023-07-12 05:17:07,066 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:07,068 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:07,069 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:07,071 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:07,074 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139027071"}]},"ts":"1689139027071"} 2023-07-12 05:17:07,074 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139027068"}]},"ts":"1689139027068"} 2023-07-12 05:17:07,081 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 05:17:07,084 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 05:17:07,086 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:07,087 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:07,087 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:07,087 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:07,087 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:07,088 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:07,089 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:07,089 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:07,089 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 
05:17:07,089 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:07,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, ASSIGN}] 2023-07-12 05:17:07,089 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=aa9de83082eb73885ee3fc61a2c971d8, ASSIGN}] 2023-07-12 05:17:07,093 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, ASSIGN 2023-07-12 05:17:07,096 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=aa9de83082eb73885ee3fc61a2c971d8, ASSIGN 2023-07-12 05:17:07,097 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:07,101 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=aa9de83082eb73885ee3fc61a2c971d8, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:07,102 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 05:17:07,105 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:07,106 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=aa9de83082eb73885ee3fc61a2c971d8, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:07,106 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139027105"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139027105"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139027105"}]},"ts":"1689139027105"} 2023-07-12 05:17:07,106 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139027106"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139027106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139027106"}]},"ts":"1689139027106"} 2023-07-12 05:17:07,111 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:07,114 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure aa9de83082eb73885ee3fc61a2c971d8, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:07,266 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:07,267 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:07,271 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:07,283 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:07,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65a59a940eb599446f9a504f8dbf75d7, NAME => 'hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:07,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:07,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. service=MultiRowMutationService 2023-07-12 05:17:07,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 05:17:07,286 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa9de83082eb73885ee3fc61a2c971d8, NAME => 'hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:07,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,290 INFO [StoreOpener-aa9de83082eb73885ee3fc61a2c971d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,290 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,294 DEBUG [StoreOpener-aa9de83082eb73885ee3fc61a2c971d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/info 2023-07-12 05:17:07,294 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 
2023-07-12 05:17:07,294 DEBUG [StoreOpener-aa9de83082eb73885ee3fc61a2c971d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/info 2023-07-12 05:17:07,294 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 2023-07-12 05:17:07,294 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65a59a940eb599446f9a504f8dbf75d7 columnFamilyName m 2023-07-12 05:17:07,295 INFO [StoreOpener-aa9de83082eb73885ee3fc61a2c971d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa9de83082eb73885ee3fc61a2c971d8 columnFamilyName info 2023-07-12 05:17:07,296 INFO [StoreOpener-aa9de83082eb73885ee3fc61a2c971d8-1] regionserver.HStore(310): Store=aa9de83082eb73885ee3fc61a2c971d8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:07,296 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(310): Store=65a59a940eb599446f9a504f8dbf75d7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:07,301 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,304 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:07,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:07,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:07,313 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened aa9de83082eb73885ee3fc61a2c971d8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10389397600, jitterRate=-0.03241194784641266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:07,314 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for aa9de83082eb73885ee3fc61a2c971d8: 2023-07-12 05:17:07,316 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8., pid=9, masterSystemTime=1689139027267 2023-07-12 05:17:07,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:07,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:07,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 
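Editor's note: the CompactionConfiguration(173) lines a little above report the per-store compaction knobs in effect for these regions (minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, minCompactSize 128 MB). As a hedged illustration, not something the test itself sets, those values correspond to the standard properties shown below; the numbers simply restate the logged defaults:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      // Sets the standard property names behind the CompactionConfiguration values
      // printed in the log; the values here restate the logged defaults.
      public static Configuration withLoggedDefaults() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // compaction ratio
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        return conf;
      }
    }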
2023-07-12 05:17:07,322 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 65a59a940eb599446f9a504f8dbf75d7; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6661121a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:07,322 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:07,323 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=aa9de83082eb73885ee3fc61a2c971d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:07,324 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139027322"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139027322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139027322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139027322"}]},"ts":"1689139027322"} 2023-07-12 05:17:07,324 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7., pid=8, masterSystemTime=1689139027266 2023-07-12 05:17:07,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:07,331 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
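Editor's note: the open above reports DisabledRegionSplitPolicy for 'hbase:rsgroup', matching the table attributes logged at creation time (a single 'm' family, the MultiRowMutationEndpoint coprocessor, and a split policy that disables splits). A minimal sketch of an equivalent descriptor, again assuming only the public HBase 2.x builder API (class and method names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RSGroupSchemaSketch {
      // Mirrors the attributes logged for 'hbase:rsgroup': one 'm' family,
      // the MultiRowMutationEndpoint coprocessor, and a split policy that
      // disables splits, which is why the open entry above shows
      // DisabledRegionSplitPolicy instead of the stepping policy used for
      // 'hbase:namespace'.
      public static TableDescriptor rsgroupLike() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("m")))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }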
2023-07-12 05:17:07,333 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:07,334 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139027333"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139027333"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139027333"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139027333"}]},"ts":"1689139027333"} 2023-07-12 05:17:07,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 05:17:07,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure aa9de83082eb73885ee3fc61a2c971d8, server=jenkins-hbase20.apache.org,46611,1689139023835 in 216 msec 2023-07-12 05:17:07,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 05:17:07,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,35711,1689139024278 in 228 msec 2023-07-12 05:17:07,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-12 05:17:07,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-12 05:17:07,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, ASSIGN in 264 msec 2023-07-12 05:17:07,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=aa9de83082eb73885ee3fc61a2c971d8, ASSIGN in 258 msec 2023-07-12 05:17:07,367 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:07,367 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:07,367 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139027367"}]},"ts":"1689139027367"} 2023-07-12 05:17:07,367 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139027367"}]},"ts":"1689139027367"} 2023-07-12 05:17:07,371 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 05:17:07,374 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 05:17:07,376 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, 
quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 05:17:07,377 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:07,377 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:07,383 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:07,386 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:07,392 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 466 msec 2023-07-12 05:17:07,394 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 530 msec 2023-07-12 05:17:07,432 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:07,434 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56788, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:07,436 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 05:17:07,441 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 05:17:07,441 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 05:17:07,473 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:07,484 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 60 msec 2023-07-12 05:17:07,496 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 05:17:07,511 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:07,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 25 msec 2023-07-12 05:17:07,537 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 05:17:07,540 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 05:17:07,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.997sec 2023-07-12 05:17:07,544 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 05:17:07,546 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 05:17:07,546 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 05:17:07,550 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41085,1689139021900-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 05:17:07,551 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41085,1689139021900-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
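Editor's note: the CreateNamespaceProcedure entries for 'default' and 'hbase' above are the master bootstrapping its built-in namespaces as part of initialization. A user-defined namespace goes through the same procedure when created via the Admin API; a minimal, self-contained sketch (the namespace name and the assumption that hbase-site.xml points at a reachable cluster are illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        // Assumes hbase-site.xml on the classpath points at a reachable cluster.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateNamespaceProcedure like the ones logged above.
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
        }
      }
    }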
2023-07-12 05:17:07,561 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:07,561 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:07,567 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:07,573 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 05:17:07,592 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 05:17:07,597 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ReadOnlyZKClient(139): Connect 0x41587bac to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:07,612 DEBUG [Listener at localhost.localdomain/33317] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ccff66, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:07,639 DEBUG [hconnection-0x61c28258-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:07,654 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59360, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:07,670 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:07,672 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:07,686 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 05:17:07,692 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 05:17:07,707 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 05:17:07,707 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:07,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 05:17:07,717 DEBUG [Listener at localhost.localdomain/33317] 
zookeeper.ReadOnlyZKClient(139): Connect 0x4d2f15cb to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:07,727 DEBUG [Listener at localhost.localdomain/33317] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2cd1b0c2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:07,728 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:07,735 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:07,736 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1007f9c80ff000a connected 2023-07-12 05:17:07,782 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=695, MaxFileDescriptor=60000, SystemLoadAverage=669, ProcessCount=170, AvailableMemoryMB=3472 2023-07-12 05:17:07,785 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 05:17:07,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:07,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:07,891 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 05:17:07,905 INFO [Listener at localhost.localdomain/33317] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:07,905 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:07,906 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:07,906 INFO [Listener at localhost.localdomain/33317] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:07,906 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:07,906 INFO [Listener at localhost.localdomain/33317] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:07,906 INFO [Listener at localhost.localdomain/33317] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer 
hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:07,911 INFO [Listener at localhost.localdomain/33317] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38695 2023-07-12 05:17:07,912 INFO [Listener at localhost.localdomain/33317] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:07,914 DEBUG [Listener at localhost.localdomain/33317] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:07,916 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:07,917 INFO [Listener at localhost.localdomain/33317] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:07,919 INFO [Listener at localhost.localdomain/33317] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38695 connecting to ZooKeeper ensemble=127.0.0.1:62508 2023-07-12 05:17:07,958 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:386950x0, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:07,960 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(162): regionserver:386950x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:07,961 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(162): regionserver:386950x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 05:17:07,962 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ZKUtil(164): regionserver:386950x0, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:07,972 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38695-0x1007f9c80ff000b connected 2023-07-12 05:17:07,972 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38695 2023-07-12 05:17:07,972 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38695 2023-07-12 05:17:07,973 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38695 2023-07-12 05:17:07,978 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38695 2023-07-12 05:17:07,979 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38695 2023-07-12 05:17:07,982 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:07,982 INFO [Listener at localhost.localdomain/33317] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:07,982 INFO [Listener at 
localhost.localdomain/33317] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:07,983 INFO [Listener at localhost.localdomain/33317] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:07,983 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:07,984 INFO [Listener at localhost.localdomain/33317] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:07,984 INFO [Listener at localhost.localdomain/33317] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 05:17:07,985 INFO [Listener at localhost.localdomain/33317] http.HttpServer(1146): Jetty bound to port 36495 2023-07-12 05:17:07,985 INFO [Listener at localhost.localdomain/33317] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:07,989 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:07,990 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b09697f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:07,990 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:07,990 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a356b4f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:08,099 INFO [Listener at localhost.localdomain/33317] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:08,100 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:08,100 INFO [Listener at localhost.localdomain/33317] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:08,100 INFO [Listener at localhost.localdomain/33317] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:08,102 INFO [Listener at localhost.localdomain/33317] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:08,103 INFO [Listener at localhost.localdomain/33317] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@8fffd09{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/java.io.tmpdir/jetty-0_0_0_0-36495-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3807001850675961771/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:08,105 INFO [Listener at localhost.localdomain/33317] server.AbstractConnector(333): Started ServerConnector@7b5edea0{HTTP/1.1, (http/1.1)}{0.0.0.0:36495} 2023-07-12 05:17:08,105 INFO [Listener at localhost.localdomain/33317] server.Server(415): Started @12035ms 2023-07-12 05:17:08,113 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(951): ClusterId : f5ed9019-a7ce-4a38-a899-5fd3bcd29e63 2023-07-12 05:17:08,117 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:08,119 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:08,119 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:08,121 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:08,123 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ReadOnlyZKClient(139): Connect 0x2aff626f to 127.0.0.1:62508 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:08,137 DEBUG [RS:3;jenkins-hbase20:38695] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24388940, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:08,138 DEBUG [RS:3;jenkins-hbase20:38695] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2007091f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:08,148 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:38695 2023-07-12 05:17:08,148 INFO [RS:3;jenkins-hbase20:38695] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:08,148 INFO [RS:3;jenkins-hbase20:38695] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:08,148 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 05:17:08,149 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41085,1689139021900 with isa=jenkins-hbase20.apache.org/148.251.75.209:38695, startcode=1689139027905 2023-07-12 05:17:08,149 DEBUG [RS:3;jenkins-hbase20:38695] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:08,155 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:37823, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:08,155 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,156 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:08,157 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e 2023-07-12 05:17:08,157 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:35039 2023-07-12 05:17:08,157 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46839 2023-07-12 05:17:08,160 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:08,160 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:08,160 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:08,160 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:08,161 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ZKUtil(162): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,161 WARN [RS:3;jenkins-hbase20:38695] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 05:17:08,161 INFO [RS:3;jenkins-hbase20:38695] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:08,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,162 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,162 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38695,1689139027905] 2023-07-12 05:17:08,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,162 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:08,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:08,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:08,163 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:08,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:08,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:08,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:08,178 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41085,1689139021900] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 05:17:08,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:08,178 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:08,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:08,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:08,186 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ZKUtil(162): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,186 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ZKUtil(162): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:08,187 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ZKUtil(162): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:08,187 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ZKUtil(162): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:08,188 DEBUG [RS:3;jenkins-hbase20:38695] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:08,189 INFO [RS:3;jenkins-hbase20:38695] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:08,195 INFO [RS:3;jenkins-hbase20:38695] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:08,195 INFO [RS:3;jenkins-hbase20:38695] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:08,195 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:08,198 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:08,201 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:08,201 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,201 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,201 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,201 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,201 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,202 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:08,202 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,202 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,202 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,202 DEBUG [RS:3;jenkins-hbase20:38695] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:08,204 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:08,204 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:08,204 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:08,217 INFO [RS:3;jenkins-hbase20:38695] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:08,217 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38695,1689139027905-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:08,230 INFO [RS:3;jenkins-hbase20:38695] regionserver.Replication(203): jenkins-hbase20.apache.org,38695,1689139027905 started 2023-07-12 05:17:08,230 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38695,1689139027905, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38695, sessionid=0x1007f9c80ff000b 2023-07-12 05:17:08,231 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:08,231 DEBUG [RS:3;jenkins-hbase20:38695] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,231 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38695,1689139027905' 2023-07-12 05:17:08,231 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:08,231 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:08,232 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:08,232 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:08,232 DEBUG [RS:3;jenkins-hbase20:38695] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:08,232 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38695,1689139027905' 2023-07-12 05:17:08,232 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:08,233 DEBUG [RS:3;jenkins-hbase20:38695] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:08,233 DEBUG [RS:3;jenkins-hbase20:38695] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:08,233 INFO [RS:3;jenkins-hbase20:38695] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:08,233 INFO [RS:3;jenkins-hbase20:38695] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
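Up to this point the log traces the fourth region server (RS:3, port 38695) reporting for duty, being registered by the master, creating its WAL directory, starting its executor pools, chores and procedure members, and finally serving. In a test of this kind the extra server is normally brought up against the already-running mini cluster; the sketch below assumes exactly that, and every identifier in it is illustrative rather than taken from the test itself:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);  // three region servers, as at the start of this log
        // Bring up a fourth region server, analogous to RS:3 registering above.
        JVMClusterUtil.RegionServerThread rs =
            util.getMiniHBaseCluster().startRegionServer();
        rs.waitForServerOnline();  // returns once the new server has reported for duty
        util.shutdownMiniCluster();
      }
    }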
2023-07-12 05:17:08,239 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:08,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:08,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:08,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:08,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:08,255 DEBUG [hconnection-0x326ecb32-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:08,264 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59374, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:08,270 DEBUG [hconnection-0x326ecb32-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:08,274 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56798, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:08,277 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:08,277 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:08,288 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:08,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:08,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:54108 deadline: 1689140228286, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:08,289 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:08,293 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:08,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:08,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:08,295 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:08,302 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:08,302 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:08,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:08,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:08,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:08,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:08,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-12 05:17:08,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:08,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:08,316 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:08,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:08,321 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:08,325 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:08,334 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(238): Moving server region 65a59a940eb599446f9a504f8dbf75d7, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:08,336 INFO [RS:3;jenkins-hbase20:38695] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38695%2C1689139027905, suffix=, logDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,38695,1689139027905, archiveDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs, maxLogs=32 2023-07-12 05:17:08,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE 2023-07-12 05:17:08,340 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 05:17:08,342 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE 2023-07-12 05:17:08,344 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 
updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:08,344 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139028344"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139028344"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139028344"}]},"ts":"1689139028344"} 2023-07-12 05:17:08,348 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:08,396 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK] 2023-07-12 05:17:08,400 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK] 2023-07-12 05:17:08,401 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK] 2023-07-12 05:17:08,408 INFO [RS:3;jenkins-hbase20:38695] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/WALs/jenkins-hbase20.apache.org,38695,1689139027905/jenkins-hbase20.apache.org%2C38695%2C1689139027905.1689139028337 2023-07-12 05:17:08,409 DEBUG [RS:3;jenkins-hbase20:38695] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43159,DS-132c0fde-2522-4404-840d-733da76c03a3,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-a7b21aef-a4e0-4f10-8508-de6bb6ca8181,DISK], DatanodeInfoWithStorage[127.0.0.1:36333,DS-3041c203-4d72-4c75-a425-decdb827eb6e,DISK]] 2023-07-12 05:17:08,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:08,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 65a59a940eb599446f9a504f8dbf75d7, disabling compactions & flushes 2023-07-12 05:17:08,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:08,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:08,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. after waiting 0 ms 2023-07-12 05:17:08,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
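The RpcServer handler entries above and below record the rsgroup admin traffic driven by the test: AddRSGroup and ListRSGroupInfos calls, a MoveServers attempt rejected with a ConstraintException because the master's address (port 41085) is not a live region server (the test logs this as expected setup noise), and then a successful move of the 38695 and 35711 servers into Group_testTableMoveTruncateAndDrop_781802648, which forces the hbase:rsgroup region to be reopened elsewhere. A hedged sketch of the same calls through the RSGroupAdminClient class named in the stack trace, assuming an open Connection to the test cluster; the host name and ports are copied from the log purely for illustration:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_781802648";
          rsGroupAdmin.addRSGroup(group);
          // Move the two region servers seen in the log into the new group.
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase20.apache.org", 38695),
              Address.fromParts("jenkins-hbase20.apache.org", 35711)));
          rsGroupAdmin.moveServers(servers, group);
          for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
            System.out.println(info.getName() + " -> " + info.getServers());
          }
        }
      }
    }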
2023-07-12 05:17:08,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 65a59a940eb599446f9a504f8dbf75d7 1/1 column families, dataSize=1.40 KB heapSize=2.39 KB 2023-07-12 05:17:08,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.40 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/e2b09ff2dfa141dfa423e74ff8b122c6 2023-07-12 05:17:08,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/e2b09ff2dfa141dfa423e74ff8b122c6 as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/e2b09ff2dfa141dfa423e74ff8b122c6 2023-07-12 05:17:08,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/e2b09ff2dfa141dfa423e74ff8b122c6, entries=3, sequenceid=9, filesize=5.2 K 2023-07-12 05:17:08,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.40 KB/1433, heapSize ~2.38 KB/2432, currentSize=0 B/0 for 65a59a940eb599446f9a504f8dbf75d7 in 215ms, sequenceid=9, compaction requested=false 2023-07-12 05:17:08,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 05:17:08,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 05:17:08,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:08,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
2023-07-12 05:17:08,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:08,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 65a59a940eb599446f9a504f8dbf75d7 move to jenkins-hbase20.apache.org,44619,1689139024083 record at close sequenceid=9 2023-07-12 05:17:08,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:08,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=CLOSED 2023-07-12 05:17:08,776 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139028775"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139028775"}]},"ts":"1689139028775"} 2023-07-12 05:17:08,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 05:17:08,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,35711,1689139024278 in 430 msec 2023-07-12 05:17:08,783 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:08,934 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:08,934 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:08,934 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139028934"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139028934"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139028934"}]},"ts":"1689139028934"} 2023-07-12 05:17:08,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:09,093 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:09,093 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:09,098 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:09,104 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:09,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65a59a940eb599446f9a504f8dbf75d7, NAME => 'hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:09,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:09,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. service=MultiRowMutationService 2023-07-12 05:17:09,105 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 05:17:09,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,107 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,109 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 2023-07-12 05:17:09,109 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 2023-07-12 05:17:09,110 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65a59a940eb599446f9a504f8dbf75d7 columnFamilyName m 2023-07-12 05:17:09,124 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/e2b09ff2dfa141dfa423e74ff8b122c6 2023-07-12 05:17:09,125 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(310): Store=65a59a940eb599446f9a504f8dbf75d7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:09,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,130 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:09,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 65a59a940eb599446f9a504f8dbf75d7; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3c37dde6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:09,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:09,137 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7., pid=14, masterSystemTime=1689139029093 2023-07-12 05:17:09,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:09,144 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:09,145 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:09,145 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139029145"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139029145"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139029145"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139029145"}]},"ts":"1689139029145"} 2023-07-12 05:17:09,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 05:17:09,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,44619,1689139024083 in 210 msec 2023-07-12 05:17:09,162 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE in 816 msec 2023-07-12 05:17:09,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 05:17:09,341 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to default 2023-07-12 05:17:09,341 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:09,342 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:09,344 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35711] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 148.251.75.209:56798 deadline: 1689139089343, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44619 startCode=1689139024083. As of locationSeqNum=9. 2023-07-12 05:17:09,449 DEBUG [hconnection-0x326ecb32-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:09,456 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52726, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:09,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:09,481 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:09,485 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:09,485 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:09,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:09,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:09,501 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:09,503 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35711] ipc.CallRunner(144): callId: 43 service: ClientService methodName: ExecService size: 624 connection: 148.251.75.209:56788 deadline: 1689139089503, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44619 startCode=1689139024083. As of locationSeqNum=9. 
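The RegionMovedException responses above are the expected consequence of the REOPEN/MOVE that just completed: the callers still hold a cached location for hbase:rsgroup pointing at the old server (35711), the exception reports the new host (44619), and the clients reconnect there. A small sketch of forcing the same re-resolution explicitly with a RegionLocator; the connection setup is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RefreshRegionLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
          // reload=true skips the cached location that produced the
          // RegionMovedException and asks hbase:meta for the current server.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:rsgroup is now on " + loc.getServerName());
        }
      }
    }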
2023-07-12 05:17:09,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-12 05:17:09,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:09,608 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:09,610 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52728, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:09,616 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:09,617 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:09,618 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:09,619 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:09,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:09,626 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:09,632 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:09,633 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:09,633 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:09,632 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:09,633 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:09,634 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 empty. 
2023-07-12 05:17:09,634 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e empty. 2023-07-12 05:17:09,634 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 empty. 2023-07-12 05:17:09,634 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b empty. 2023-07-12 05:17:09,634 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe empty. 2023-07-12 05:17:09,634 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:09,635 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:09,635 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:09,636 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:09,636 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:09,636 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 05:17:09,671 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:09,673 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b693bcc3caf831949852e2d13444fbb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:09,679 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 64d6812eff3fb9c425bb88837aea8f91, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:09,679 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7dfb5663eb15b58f699747f20ce07bbe, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:09,791 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,792 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 64d6812eff3fb9c425bb88837aea8f91, disabling compactions & flushes 2023-07-12 05:17:09,793 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,793 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b693bcc3caf831949852e2d13444fbb5, disabling compactions & flushes 2023-07-12 05:17:09,793 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:09,793 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 
2023-07-12 05:17:09,794 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. after waiting 0 ms 2023-07-12 05:17:09,794 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:09,794 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:09,794 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b693bcc3caf831949852e2d13444fbb5: 2023-07-12 05:17:09,794 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 61f1866a9220de80031591e97bf6f03b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:09,793 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:09,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:09,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. after waiting 0 ms 2023-07-12 05:17:09,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:09,795 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 
2023-07-12 05:17:09,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 64d6812eff3fb9c425bb88837aea8f91: 2023-07-12 05:17:09,796 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f15ccd177424d9cbb896a9f34dc0202e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:09,796 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7dfb5663eb15b58f699747f20ce07bbe, disabling compactions & flushes 2023-07-12 05:17:09,797 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:09,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:09,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. after waiting 0 ms 2023-07-12 05:17:09,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:09,797 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 
2023-07-12 05:17:09,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7dfb5663eb15b58f699747f20ce07bbe: 2023-07-12 05:17:09,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:09,856 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f15ccd177424d9cbb896a9f34dc0202e, disabling compactions & flushes 2023-07-12 05:17:09,857 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:09,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:09,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. after waiting 0 ms 2023-07-12 05:17:09,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:09,857 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:09,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f15ccd177424d9cbb896a9f34dc0202e: 2023-07-12 05:17:09,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:09,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 61f1866a9220de80031591e97bf6f03b, disabling compactions & flushes 2023-07-12 05:17:09,858 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:09,859 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:09,859 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
after waiting 0 ms 2023-07-12 05:17:09,859 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:09,859 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:09,859 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 61f1866a9220de80031591e97bf6f03b: 2023-07-12 05:17:09,864 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:09,865 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139029865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139029865"}]},"ts":"1689139029865"} 2023-07-12 05:17:09,865 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139029865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139029865"}]},"ts":"1689139029865"} 2023-07-12 05:17:09,865 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139029865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139029865"}]},"ts":"1689139029865"} 2023-07-12 05:17:09,866 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139029865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139029865"}]},"ts":"1689139029865"} 2023-07-12 05:17:09,866 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139029865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139029865"}]},"ts":"1689139029865"} 2023-07-12 05:17:09,919 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
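The "Added 5 regions to meta" entry and the five preceding Put records show CREATE_TABLE_ADD_TO_META writing one hbase:meta row per region, keyed as <table>,<startKey>,<regionId>.<encodedName>. A hedged sketch of reading that state back through the client API (the class name is an assumption; this is not code from the test itself):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch: list the regions that CREATE_TABLE_ADD_TO_META just registered,
// analogous to the five Put entries logged above.
public class ListTableRegionsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      for (RegionInfo region : admin.getRegions(tn)) {
        System.out.println(region.getRegionNameAsString()
            + " start=" + Bytes.toStringBinary(region.getStartKey())
            + " end=" + Bytes.toStringBinary(region.getEndKey())
            + " encoded=" + region.getEncodedName());
      }
    }
  }
}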
2023-07-12 05:17:09,921 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:09,921 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139029921"}]},"ts":"1689139029921"} 2023-07-12 05:17:09,923 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 05:17:09,929 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:09,930 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:09,930 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:09,930 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:09,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, ASSIGN}] 2023-07-12 05:17:09,934 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, ASSIGN 2023-07-12 05:17:09,934 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, ASSIGN 2023-07-12 05:17:09,935 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, ASSIGN 2023-07-12 05:17:09,935 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, ASSIGN 2023-07-12 05:17:09,936 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:09,937 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, ASSIGN 2023-07-12 05:17:09,937 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:09,937 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:09,937 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:09,938 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:10,087 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
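At this point the master has spawned one TransitRegionStateProcedure (ASSIGN) per region and the balancer has chosen a server for each ("Reassigned 5 regions"). As a hedged illustration of how that placement can be observed from a client once the assignments finish (class name assumed; not the test's own verification code):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

// Hedged sketch: read back the region -> server placement chosen by the balancer
// for the five regions assigned above.
public class ShowRegionPlacementSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(tn)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}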
2023-07-12 05:17:10,093 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,093 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,093 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030093"}]},"ts":"1689139030093"} 2023-07-12 05:17:10,093 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030093"}]},"ts":"1689139030093"} 2023-07-12 05:17:10,094 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030094"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030094"}]},"ts":"1689139030094"} 2023-07-12 05:17:10,094 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,094 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,095 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030094"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030094"}]},"ts":"1689139030094"} 2023-07-12 05:17:10,095 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030094"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030094"}]},"ts":"1689139030094"} 2023-07-12 05:17:10,097 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=18, state=RUNNABLE; OpenRegionProcedure 
64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:10,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:10,109 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=17, state=RUNNABLE; OpenRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,115 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=19, state=RUNNABLE; OpenRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:10,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f15ccd177424d9cbb896a9f34dc0202e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 05:17:10,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:10,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,269 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,272 DEBUG [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/f 2023-07-12 05:17:10,272 DEBUG [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/f 2023-07-12 05:17:10,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b693bcc3caf831949852e2d13444fbb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 05:17:10,273 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f15ccd177424d9cbb896a9f34dc0202e columnFamilyName f 2023-07-12 05:17:10,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:10,274 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] regionserver.HStore(310): Store=f15ccd177424d9cbb896a9f34dc0202e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:10,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,281 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,283 DEBUG [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/f 2023-07-12 05:17:10,283 DEBUG [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/f 2023-07-12 05:17:10,284 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b693bcc3caf831949852e2d13444fbb5 columnFamilyName f 2023-07-12 05:17:10,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,285 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] regionserver.HStore(310): Store=b693bcc3caf831949852e2d13444fbb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:10,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:10,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f15ccd177424d9cbb896a9f34dc0202e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9754761760, jitterRate=-0.09151701629161835}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:10,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f15ccd177424d9cbb896a9f34dc0202e: 2023-07-12 05:17:10,291 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e., pid=22, masterSystemTime=1689139030260 2023-07-12 05:17:10,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64d6812eff3fb9c425bb88837aea8f91, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 05:17:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,295 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,295 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030295"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139030295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139030295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139030295"}]},"ts":"1689139030295"} 2023-07-12 05:17:10,298 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:10,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened b693bcc3caf831949852e2d13444fbb5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11293201760, jitterRate=0.05176137387752533}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:10,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for b693bcc3caf831949852e2d13444fbb5: 2023-07-12 05:17:10,301 DEBUG [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/f 2023-07-12 05:17:10,302 DEBUG [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/f 2023-07-12 05:17:10,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5., pid=24, masterSystemTime=1689139030267 2023-07-12 05:17:10,302 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64d6812eff3fb9c425bb88837aea8f91 columnFamilyName f 2023-07-12 05:17:10,304 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] regionserver.HStore(310): Store=64d6812eff3fb9c425bb88837aea8f91/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:10,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
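The CompactionConfiguration and SteppingSplitPolicy lines printed while each store opens are just the effective defaults echoed back: minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2, a 7-day major compaction period with 0.5 jitter, and a ~10 GB base split size perturbed by jitterRate. To the best of my knowledge these correspond to the standard properties sketched below; treat the exact keys as assumptions to verify against your HBase version rather than as values set by this test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hedged sketch of the configuration keys believed to back the values echoed
// in the CompactionConfiguration / split-policy log lines above.
public class CompactionTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // selection ratio
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // 7-day major period
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major compaction jitter
    conf.setLong("hbase.hregion.max.filesize", 10737418240L);      // ~10 GB split size base
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    // A running cluster picks these up from hbase-site.xml; setting them here only
    // affects processes built from this Configuration instance.
  }
}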
2023-07-12 05:17:10,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61f1866a9220de80031591e97bf6f03b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 05:17:10,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:10,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-12 05:17:10,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; OpenRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,46611,1689139023835 in 196 msec 2023-07-12 05:17:10,307 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,307 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030307"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139030307"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139030307"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139030307"}]},"ts":"1689139030307"} 2023-07-12 05:17:10,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,310 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,313 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, ASSIGN in 377 msec 2023-07-12 05:17:10,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,314 DEBUG [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/f 2023-07-12 05:17:10,314 DEBUG [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/f 2023-07-12 05:17:10,315 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61f1866a9220de80031591e97bf6f03b columnFamilyName f 2023-07-12 05:17:10,316 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] regionserver.HStore(310): Store=61f1866a9220de80031591e97bf6f03b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:10,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-12 05:17:10,319 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, state=SUCCESS; OpenRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,44619,1689139024083 in 196 msec 2023-07-12 05:17:10,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, ASSIGN in 389 msec 2023-07-12 05:17:10,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:10,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 64d6812eff3fb9c425bb88837aea8f91; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10268615040, jitterRate=-0.04366070032119751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:10,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 64d6812eff3fb9c425bb88837aea8f91: 2023-07-12 05:17:10,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91., pid=21, masterSystemTime=1689139030260 2023-07-12 05:17:10,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:10,329 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:10,330 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,330 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030330"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139030330"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139030330"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139030330"}]},"ts":"1689139030330"} 2023-07-12 05:17:10,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:10,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 61f1866a9220de80031591e97bf6f03b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9514295680, jitterRate=-0.11391216516494751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:10,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 61f1866a9220de80031591e97bf6f03b: 2023-07-12 05:17:10,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b., pid=25, masterSystemTime=1689139030267 2023-07-12 05:17:10,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished 
subprocedure pid=21, resume processing ppid=18 2023-07-12 05:17:10,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; OpenRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,46611,1689139023835 in 235 msec 2023-07-12 05:17:10,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:10,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7dfb5663eb15b58f699747f20ce07bbe, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 05:17:10,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:10,338 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,339 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030338"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139030338"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139030338"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139030338"}]},"ts":"1689139030338"} 2023-07-12 05:17:10,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, ASSIGN in 407 msec 2023-07-12 05:17:10,345 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,349 DEBUG [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/f 2023-07-12 05:17:10,349 DEBUG [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/f 2023-07-12 05:17:10,350 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7dfb5663eb15b58f699747f20ce07bbe columnFamilyName f 2023-07-12 05:17:10,351 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] regionserver.HStore(310): Store=7dfb5663eb15b58f699747f20ce07bbe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:10,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=19 2023-07-12 05:17:10,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=19, state=SUCCESS; OpenRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,44619,1689139024083 in 229 msec 2023-07-12 05:17:10,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,356 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, ASSIGN in 423 msec 2023-07-12 05:17:10,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:10,364 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7dfb5663eb15b58f699747f20ce07bbe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9441592800, jitterRate=-0.12068314850330353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:10,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7dfb5663eb15b58f699747f20ce07bbe: 2023-07-12 05:17:10,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe., pid=23, masterSystemTime=1689139030267 2023-07-12 05:17:10,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:10,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:10,368 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,368 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030368"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139030368"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139030368"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139030368"}]},"ts":"1689139030368"} 2023-07-12 05:17:10,373 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=17 2023-07-12 05:17:10,373 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=17, state=SUCCESS; OpenRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,44619,1689139024083 in 261 msec 2023-07-12 05:17:10,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-12 05:17:10,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, ASSIGN in 443 msec 2023-07-12 05:17:10,378 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:10,378 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139030378"}]},"ts":"1689139030378"} 2023-07-12 05:17:10,380 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 05:17:10,382 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, 
state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:10,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 887 msec 2023-07-12 05:17:10,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:10,636 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-12 05:17:10,637 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-12 05:17:10,639 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:10,647 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 05:17:10,648 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:10,648 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-12 05:17:10,648 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:10,653 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:10,656 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:10,659 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:10,663 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:10,664 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:10,668 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52736, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:10,670 DEBUG [Listener at localhost.localdomain/33317] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:10,674 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58576, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:10,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:10,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:10,686 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:10,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:10,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:10,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region b693bcc3caf831949852e2d13444fbb5 to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:10,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:10,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:10,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, REOPEN/MOVE 2023-07-12 05:17:10,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 7dfb5663eb15b58f699747f20ce07bbe to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,717 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, REOPEN/MOVE 2023-07-12 05:17:10,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:10,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:10,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:10,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:10,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:10,719 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030719"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030719"}]},"ts":"1689139030719"} 2023-07-12 05:17:10,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, REOPEN/MOVE 2023-07-12 05:17:10,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 64d6812eff3fb9c425bb88837aea8f91 to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:10,723 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, REOPEN/MOVE 2023-07-12 05:17:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:10,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,726 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 
05:17:10,728 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030726"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030726"}]},"ts":"1689139030726"} 2023-07-12 05:17:10,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, REOPEN/MOVE 2023-07-12 05:17:10,729 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, REOPEN/MOVE 2023-07-12 05:17:10,731 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=27, state=RUNNABLE; CloseRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,732 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030731"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030731"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030731"}]},"ts":"1689139030731"} 2023-07-12 05:17:10,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 61f1866a9220de80031591e97bf6f03b to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:10,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:10,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:10,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:10,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:10,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=28, state=RUNNABLE; CloseRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:10,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=61f1866a9220de80031591e97bf6f03b, REOPEN/MOVE 2023-07-12 05:17:10,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region f15ccd177424d9cbb896a9f34dc0202e to RSGroup Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:10,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:10,740 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, REOPEN/MOVE 2023-07-12 05:17:10,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:10,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:10,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:10,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:10,742 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:10,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, REOPEN/MOVE 2023-07-12 05:17:10,742 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030742"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030742"}]},"ts":"1689139030742"} 2023-07-12 05:17:10,743 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, REOPEN/MOVE 2023-07-12 05:17:10,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_781802648, current retry=0 2023-07-12 05:17:10,745 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:10,746 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030745"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139030745"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139030745"}]},"ts":"1689139030745"} 2023-07-12 
05:17:10,746 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=32, state=RUNNABLE; CloseRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:10,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; CloseRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:10,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 61f1866a9220de80031591e97bf6f03b, disabling compactions & flushes 2023-07-12 05:17:10,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. after waiting 0 ms 2023-07-12 05:17:10,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,888 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:10,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:10,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 61f1866a9220de80031591e97bf6f03b: 2023-07-12 05:17:10,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 61f1866a9220de80031591e97bf6f03b move to jenkins-hbase20.apache.org,38695,1689139027905 record at close sequenceid=2 2023-07-12 05:17:10,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:10,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7dfb5663eb15b58f699747f20ce07bbe, disabling compactions & flushes 2023-07-12 05:17:10,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 
2023-07-12 05:17:10,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:10,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. after waiting 0 ms 2023-07-12 05:17:10,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:10,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f15ccd177424d9cbb896a9f34dc0202e, disabling compactions & flushes 2023-07-12 05:17:10,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. after waiting 0 ms 2023-07-12 05:17:10,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 
2023-07-12 05:17:10,895 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=CLOSED 2023-07-12 05:17:10,895 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030895"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139030895"}]},"ts":"1689139030895"} 2023-07-12 05:17:10,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=32 2023-07-12 05:17:10,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=32, state=SUCCESS; CloseRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,44619,1689139024083 in 151 msec 2023-07-12 05:17:10,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:10,905 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:10,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:10,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:10,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f15ccd177424d9cbb896a9f34dc0202e: 2023-07-12 05:17:10,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f15ccd177424d9cbb896a9f34dc0202e move to jenkins-hbase20.apache.org,35711,1689139024278 record at close sequenceid=2 2023-07-12 05:17:10,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 
2023-07-12 05:17:10,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7dfb5663eb15b58f699747f20ce07bbe: 2023-07-12 05:17:10,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 7dfb5663eb15b58f699747f20ce07bbe move to jenkins-hbase20.apache.org,35711,1689139024278 record at close sequenceid=2 2023-07-12 05:17:10,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:10,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 64d6812eff3fb9c425bb88837aea8f91, disabling compactions & flushes 2023-07-12 05:17:10,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:10,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:10,911 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=CLOSED 2023-07-12 05:17:10,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. after waiting 0 ms 2023-07-12 05:17:10,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 
2023-07-12 05:17:10,911 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030911"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139030911"}]},"ts":"1689139030911"} 2023-07-12 05:17:10,912 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=CLOSED 2023-07-12 05:17:10,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:10,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,912 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139030912"}]},"ts":"1689139030912"} 2023-07-12 05:17:10,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing b693bcc3caf831949852e2d13444fbb5, disabling compactions & flushes 2023-07-12 05:17:10,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. after waiting 0 ms 2023-07-12 05:17:10,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:10,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:10,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 
2023-07-12 05:17:10,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 64d6812eff3fb9c425bb88837aea8f91: 2023-07-12 05:17:10,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 64d6812eff3fb9c425bb88837aea8f91 move to jenkins-hbase20.apache.org,38695,1689139027905 record at close sequenceid=2 2023-07-12 05:17:10,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:10,927 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=33 2023-07-12 05:17:10,927 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=33, state=SUCCESS; CloseRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,46611,1689139023835 in 168 msec 2023-07-12 05:17:10,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:10,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=27 2023-07-12 05:17:10,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=27, state=SUCCESS; CloseRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,44619,1689139024083 in 191 msec 2023-07-12 05:17:10,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 
2023-07-12 05:17:10,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for b693bcc3caf831949852e2d13444fbb5: 2023-07-12 05:17:10,928 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=CLOSED 2023-07-12 05:17:10,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding b693bcc3caf831949852e2d13444fbb5 move to jenkins-hbase20.apache.org,35711,1689139024278 record at close sequenceid=2 2023-07-12 05:17:10,929 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139030928"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139030928"}]},"ts":"1689139030928"} 2023-07-12 05:17:10,929 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:10,930 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:10,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:10,937 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=CLOSED 2023-07-12 05:17:10,937 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139030937"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139030937"}]},"ts":"1689139030937"} 2023-07-12 05:17:10,943 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-12 05:17:10,943 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,44619,1689139024083 in 215 msec 2023-07-12 05:17:10,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=28 2023-07-12 05:17:10,944 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:10,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=28, state=SUCCESS; CloseRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, 
server=jenkins-hbase20.apache.org,46611,1689139023835 in 205 msec 2023-07-12 05:17:10,946 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:11,056 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-12 05:17:11,056 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,056 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,056 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,056 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031056"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031056"}]},"ts":"1689139031056"} 2023-07-12 05:17:11,056 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,057 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031056"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031056"}]},"ts":"1689139031056"} 2023-07-12 05:17:11,057 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031056"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031056"}]},"ts":"1689139031056"} 2023-07-12 05:17:11,056 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,057 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031056"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031056"}]},"ts":"1689139031056"} 
2023-07-12 05:17:11,057 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031056"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031056"}]},"ts":"1689139031056"} 2023-07-12 05:17:11,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:11,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; OpenRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=28, state=RUNNABLE; OpenRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:11,076 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=27, state=RUNNABLE; OpenRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,215 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,215 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:11,217 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:11,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 
2023-07-12 05:17:11,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64d6812eff3fb9c425bb88837aea8f91, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 05:17:11,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:11,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,229 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 
2023-07-12 05:17:11,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7dfb5663eb15b58f699747f20ce07bbe, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 05:17:11,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,232 DEBUG [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/f 2023-07-12 05:17:11,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:11,232 DEBUG [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/f 2023-07-12 05:17:11,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,235 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64d6812eff3fb9c425bb88837aea8f91 columnFamilyName f 2023-07-12 05:17:11,235 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,237 INFO [StoreOpener-64d6812eff3fb9c425bb88837aea8f91-1] regionserver.HStore(310): Store=64d6812eff3fb9c425bb88837aea8f91/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:11,238 DEBUG [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/f 2023-07-12 05:17:11,238 DEBUG [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/f 2023-07-12 05:17:11,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,239 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7dfb5663eb15b58f699747f20ce07bbe columnFamilyName f 2023-07-12 05:17:11,240 INFO [StoreOpener-7dfb5663eb15b58f699747f20ce07bbe-1] regionserver.HStore(310): Store=7dfb5663eb15b58f699747f20ce07bbe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:11,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 64d6812eff3fb9c425bb88837aea8f91; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11269264160, jitterRate=0.04953201115131378}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:11,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 
64d6812eff3fb9c425bb88837aea8f91: 2023-07-12 05:17:11,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91., pid=38, masterSystemTime=1689139031215 2023-07-12 05:17:11,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7dfb5663eb15b58f699747f20ce07bbe; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11559388640, jitterRate=0.07655195891857147}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:11,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7dfb5663eb15b58f699747f20ce07bbe: 2023-07-12 05:17:11,254 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe., pid=39, masterSystemTime=1689139031224 2023-07-12 05:17:11,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:11,256 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031256"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139031256"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139031256"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139031256"}]},"ts":"1689139031256"} 2023-07-12 05:17:11,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:11,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
2023-07-12 05:17:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61f1866a9220de80031591e97bf6f03b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 05:17:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:11,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:11,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:11,261 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 
2023-07-12 05:17:11,262 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031260"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139031260"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139031260"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139031260"}]},"ts":"1689139031260"} 2023-07-12 05:17:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b693bcc3caf831949852e2d13444fbb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 05:17:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=28 2023-07-12 05:17:11,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=28, state=SUCCESS; OpenRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,38695,1689139027905 in 188 msec 2023-07-12 05:17:11,266 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,266 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,268 DEBUG [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/f 2023-07-12 05:17:11,268 DEBUG [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/f 2023-07-12 05:17:11,268 DEBUG [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/f 2023-07-12 05:17:11,268 DEBUG [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/f 2023-07-12 05:17:11,269 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b693bcc3caf831949852e2d13444fbb5 columnFamilyName f 2023-07-12 05:17:11,271 INFO [StoreOpener-b693bcc3caf831949852e2d13444fbb5-1] regionserver.HStore(310): Store=b693bcc3caf831949852e2d13444fbb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:11,271 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61f1866a9220de80031591e97bf6f03b columnFamilyName f 2023-07-12 05:17:11,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, REOPEN/MOVE in 540 msec 2023-07-12 05:17:11,274 INFO [StoreOpener-61f1866a9220de80031591e97bf6f03b-1] regionserver.HStore(310): Store=61f1866a9220de80031591e97bf6f03b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:11,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,275 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=27 2023-07-12 05:17:11,275 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=27, state=SUCCESS; OpenRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,35711,1689139024278 in 189 msec 2023-07-12 05:17:11,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,279 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, REOPEN/MOVE in 557 msec 2023-07-12 05:17:11,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened b693bcc3caf831949852e2d13444fbb5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9663493280, jitterRate=-0.10001705586910248}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:11,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for b693bcc3caf831949852e2d13444fbb5: 2023-07-12 05:17:11,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5., pid=40, masterSystemTime=1689139031224 2023-07-12 05:17:11,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 
2023-07-12 05:17:11,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f15ccd177424d9cbb896a9f34dc0202e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 05:17:11,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:11,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,285 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 61f1866a9220de80031591e97bf6f03b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10814372960, jitterRate=0.007166966795921326}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:11,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 61f1866a9220de80031591e97bf6f03b: 2023-07-12 05:17:11,286 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b., pid=36, masterSystemTime=1689139031215 2023-07-12 05:17:11,286 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,286 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031286"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139031286"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139031286"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139031286"}]},"ts":"1689139031286"} 2023-07-12 05:17:11,287 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
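The REOPEN/MOVE procedures above finish by reopening every region of Group_testTableMoveTruncateAndDrop on a RegionServer in the target group (the hosts with ports 35711 and 38695 in this run). As a rough sketch only, and not code taken from this test, the standard client API can show where the regions ended up; the class name below is hypothetical and a reachable cluster configuration on the classpath is assumed:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class PrintRegionLocations {  // hypothetical helper, not part of the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumes hbase-site.xml for the cluster is on the classpath
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(table)) {
          // One HRegionLocation per region; the server names should match the hosts that the
          // OpenRegionProcedure entries above report for each reopened region.
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
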
2023-07-12 05:17:11,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:11,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,290 DEBUG [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/f 2023-07-12 05:17:11,290 DEBUG [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/f 2023-07-12 05:17:11,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031290"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139031290"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139031290"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139031290"}]},"ts":"1689139031290"} 2023-07-12 05:17:11,291 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f15ccd177424d9cbb896a9f34dc0202e columnFamilyName f 2023-07-12 05:17:11,292 INFO [StoreOpener-f15ccd177424d9cbb896a9f34dc0202e-1] regionserver.HStore(310): Store=f15ccd177424d9cbb896a9f34dc0202e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:11,292 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-12 05:17:11,292 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,35711,1689139024278 in 208 msec 2023-07-12 05:17:11,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, REOPEN/MOVE in 582 msec 2023-07-12 05:17:11,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-12 05:17:11,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,38695,1689139027905 in 233 msec 2023-07-12 05:17:11,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, REOPEN/MOVE in 561 msec 2023-07-12 05:17:11,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f15ccd177424d9cbb896a9f34dc0202e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11837119200, jitterRate=0.10241763293743134}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:11,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f15ccd177424d9cbb896a9f34dc0202e: 2023-07-12 05:17:11,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e., pid=37, masterSystemTime=1689139031224 2023-07-12 05:17:11,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:11,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 
2023-07-12 05:17:11,307 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,307 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031307"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139031307"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139031307"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139031307"}]},"ts":"1689139031307"} 2023-07-12 05:17:11,312 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-12 05:17:11,312 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; OpenRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,35711,1689139024278 in 241 msec 2023-07-12 05:17:11,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, REOPEN/MOVE in 571 msec 2023-07-12 05:17:11,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-12 05:17:11,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_781802648. 
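The "All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_781802648" line above marks the completion of an RSGroupAdminService.MoveTables request. A minimal sketch of the client side of such a call, assuming the RSGroupAdminClient from the hbase-rsgroup module this test exercises (the class name and connection handling are illustrative, not taken from this log):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroup {  // hypothetical helper, not part of the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumes a reachable cluster configuration
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Corresponds to the RSGroupAdminService.MoveTables request logged above; the master
          // answers by running a REOPEN/MOVE TransitRegionStateProcedure for every region.
          rsGroupAdmin.moveTables(Collections.singleton(table),
              "Group_testTableMoveTruncateAndDrop_781802648");
          // Corresponds to RSGroupAdminService.GetRSGroupInfoOfTable, also visible above.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println(table + " is now in group " + info.getName());
        }
      }
    }
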
2023-07-12 05:17:11,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:11,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:11,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:11,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:11,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:11,755 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:11,762 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:11,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:11,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:11,783 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139031783"}]},"ts":"1689139031783"} 2023-07-12 05:17:11,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 05:17:11,785 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 05:17:11,786 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 05:17:11,788 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, UNASSIGN}] 2023-07-12 05:17:11,791 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, UNASSIGN 2023-07-12 05:17:11,791 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, UNASSIGN 2023-07-12 05:17:11,791 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=64d6812eff3fb9c425bb88837aea8f91, UNASSIGN 2023-07-12 05:17:11,791 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, UNASSIGN 2023-07-12 05:17:11,791 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, UNASSIGN 2023-07-12 05:17:11,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,792 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031792"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031792"}]},"ts":"1689139031792"} 2023-07-12 05:17:11,792 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031792"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031792"}]},"ts":"1689139031792"} 2023-07-12 05:17:11,792 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:11,792 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,792 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031792"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031792"}]},"ts":"1689139031792"} 2023-07-12 05:17:11,792 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:11,793 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031792"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031792"}]},"ts":"1689139031792"} 2023-07-12 05:17:11,793 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031792"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139031792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139031792"}]},"ts":"1689139031792"} 2023-07-12 05:17:11,794 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=46, state=RUNNABLE; CloseRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=45, state=RUNNABLE; CloseRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:11,796 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=44, state=RUNNABLE; CloseRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:11,797 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=42, state=RUNNABLE; CloseRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,799 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=43, state=RUNNABLE; CloseRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:11,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 05:17:11,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7dfb5663eb15b58f699747f20ce07bbe, disabling compactions & flushes 2023-07-12 05:17:11,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 
2023-07-12 05:17:11,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:11,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. after waiting 0 ms 2023-07-12 05:17:11,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:11,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 64d6812eff3fb9c425bb88837aea8f91, disabling compactions & flushes 2023-07-12 05:17:11,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:11,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:11,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. after waiting 0 ms 2023-07-12 05:17:11,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 2023-07-12 05:17:11,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:11,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe. 2023-07-12 05:17:11,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7dfb5663eb15b58f699747f20ce07bbe: 2023-07-12 05:17:11,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:11,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91. 
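The close/unassign activity above is driven by DisableTableProcedure pid=41, which marks the table DISABLING, unassigns all five regions, and then marks it DISABLED. On the client side the whole sequence corresponds to a single blocking Admin call; a minimal sketch, assuming a reachable cluster configuration and a hypothetical class name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTable {  // hypothetical helper, not part of the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumes a reachable cluster configuration
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Submits a DisableTableProcedure (pid=41 above) and blocks until it finishes.
          admin.disableTable(table);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }

The blocking behaviour is also visible in the log itself: the client polls the master while the procedure runs, which produces the repeated "Checking to see if procedure is done pid=41" entries.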
2023-07-12 05:17:11,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 64d6812eff3fb9c425bb88837aea8f91: 2023-07-12 05:17:11,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:11,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing b693bcc3caf831949852e2d13444fbb5, disabling compactions & flushes 2023-07-12 05:17:11,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. after waiting 0 ms 2023-07-12 05:17:11,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,978 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=7dfb5663eb15b58f699747f20ce07bbe, regionState=CLOSED 2023-07-12 05:17:11,978 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031978"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139031978"}]},"ts":"1689139031978"} 2023-07-12 05:17:11,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:11,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:11,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 61f1866a9220de80031591e97bf6f03b, disabling compactions & flushes 2023-07-12 05:17:11,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:11,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:11,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
after waiting 0 ms 2023-07-12 05:17:11,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 2023-07-12 05:17:11,981 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=64d6812eff3fb9c425bb88837aea8f91, regionState=CLOSED 2023-07-12 05:17:11,981 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139031981"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139031981"}]},"ts":"1689139031981"} 2023-07-12 05:17:11,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:11,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5. 2023-07-12 05:17:11,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for b693bcc3caf831949852e2d13444fbb5: 2023-07-12 05:17:11,998 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-12 05:17:11,998 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; CloseRegionProcedure 64d6812eff3fb9c425bb88837aea8f91, server=jenkins-hbase20.apache.org,38695,1689139027905 in 193 msec 2023-07-12 05:17:11,998 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=43 2023-07-12 05:17:11,998 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=43, state=SUCCESS; CloseRegionProcedure 7dfb5663eb15b58f699747f20ce07bbe, server=jenkins-hbase20.apache.org,35711,1689139024278 in 186 msec 2023-07-12 05:17:11,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:11,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:11,999 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=b693bcc3caf831949852e2d13444fbb5, regionState=CLOSED 2023-07-12 05:17:11,999 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139031999"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139031999"}]},"ts":"1689139031999"} 2023-07-12 05:17:12,000 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7dfb5663eb15b58f699747f20ce07bbe, UNASSIGN in 210 msec 2023-07-12 05:17:12,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=64d6812eff3fb9c425bb88837aea8f91, UNASSIGN in 210 msec 2023-07-12 05:17:12,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=42 2023-07-12 05:17:12,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=42, state=SUCCESS; CloseRegionProcedure b693bcc3caf831949852e2d13444fbb5, server=jenkins-hbase20.apache.org,35711,1689139024278 in 204 msec 2023-07-12 05:17:12,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b693bcc3caf831949852e2d13444fbb5, UNASSIGN in 216 msec 2023-07-12 05:17:12,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f15ccd177424d9cbb896a9f34dc0202e, disabling compactions & flushes 2023-07-12 05:17:12,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:12,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:12,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. after waiting 0 ms 2023-07-12 05:17:12,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 2023-07-12 05:17:12,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:12,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b. 
2023-07-12 05:17:12,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 61f1866a9220de80031591e97bf6f03b: 2023-07-12 05:17:12,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:12,022 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=61f1866a9220de80031591e97bf6f03b, regionState=CLOSED 2023-07-12 05:17:12,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:12,022 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032022"}]},"ts":"1689139032022"} 2023-07-12 05:17:12,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=45 2023-07-12 05:17:12,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; CloseRegionProcedure 61f1866a9220de80031591e97bf6f03b, server=jenkins-hbase20.apache.org,38695,1689139027905 in 230 msec 2023-07-12 05:17:12,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e. 
2023-07-12 05:17:12,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f15ccd177424d9cbb896a9f34dc0202e: 2023-07-12 05:17:12,037 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61f1866a9220de80031591e97bf6f03b, UNASSIGN in 240 msec 2023-07-12 05:17:12,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:12,038 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f15ccd177424d9cbb896a9f34dc0202e, regionState=CLOSED 2023-07-12 05:17:12,038 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032038"}]},"ts":"1689139032038"} 2023-07-12 05:17:12,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=46 2023-07-12 05:17:12,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=46, state=SUCCESS; CloseRegionProcedure f15ccd177424d9cbb896a9f34dc0202e, server=jenkins-hbase20.apache.org,35711,1689139024278 in 246 msec 2023-07-12 05:17:12,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-12 05:17:12,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f15ccd177424d9cbb896a9f34dc0202e, UNASSIGN in 254 msec 2023-07-12 05:17:12,046 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139032046"}]},"ts":"1689139032046"} 2023-07-12 05:17:12,049 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 05:17:12,050 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 05:17:12,057 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 287 msec 2023-07-12 05:17:12,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 05:17:12,088 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-12 05:17:12,090 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:12,096 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$6(2260): Client=jenkins//148.251.75.209 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:12,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure 
(table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-12 05:17:12,110 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 05:17:12,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 05:17:12,129 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:12,129 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:12,129 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:12,129 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:12,129 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:12,136 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits] 2023-07-12 05:17:12,136 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits] 2023-07-12 05:17:12,137 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits] 2023-07-12 05:17:12,142 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/f, FileablePath, 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits] 2023-07-12 05:17:12,150 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits] 2023-07-12 05:17:12,156 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 05:17:12,165 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e/recovered.edits/7.seqid 2023-07-12 05:17:12,165 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b/recovered.edits/7.seqid 2023-07-12 05:17:12,167 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe/recovered.edits/7.seqid 2023-07-12 05:17:12,168 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61f1866a9220de80031591e97bf6f03b 2023-07-12 05:17:12,175 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f15ccd177424d9cbb896a9f34dc0202e 2023-07-12 05:17:12,176 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91/recovered.edits/7.seqid 2023-07-12 05:17:12,176 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7dfb5663eb15b58f699747f20ce07bbe 2023-07-12 05:17:12,182 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/64d6812eff3fb9c425bb88837aea8f91 2023-07-12 05:17:12,192 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5/recovered.edits/7.seqid 2023-07-12 05:17:12,193 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b693bcc3caf831949852e2d13444fbb5 2023-07-12 05:17:12,193 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 05:17:12,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 05:17:12,257 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 05:17:12,265 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 05:17:12,266 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
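The archiving above is TruncateTableProcedure pid=52 at work; as the class names in the entries show, it reuses DeleteTableProcedure helpers to move the old region directories to the archive via HFileArchiver and to remove the old region rows from hbase:meta (the five Delete entries that follow), before recreating empty regions on the same split keys. The client-side trigger is a single Admin call on an already-disabled table; a minimal sketch with a hypothetical class name and an assumed cluster configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTablePreservingSplits {  // hypothetical helper, not part of the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // assumes a reachable cluster configuration
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // The table must already be disabled. preserveSplits=true matches the
          // "TruncateTableProcedure (table=... preserveSplits=true)" entry above: the old region
          // data is archived and new empty regions are created on the same split boundaries.
          admin.truncateTable(table, true);
        }
      }
    }

With preserveSplits=false the table would instead come back as a single empty region.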
2023-07-12 05:17:12,266 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139032266"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,266 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139032266"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,266 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139032266"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,266 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139032266"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,267 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139032266"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,278 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 05:17:12,278 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b693bcc3caf831949852e2d13444fbb5, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139029492.b693bcc3caf831949852e2d13444fbb5.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7dfb5663eb15b58f699747f20ce07bbe, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139029492.7dfb5663eb15b58f699747f20ce07bbe.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 64d6812eff3fb9c425bb88837aea8f91, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139029492.64d6812eff3fb9c425bb88837aea8f91.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 61f1866a9220de80031591e97bf6f03b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139029492.61f1866a9220de80031591e97bf6f03b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f15ccd177424d9cbb896a9f34dc0202e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139029492.f15ccd177424d9cbb896a9f34dc0202e.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 05:17:12,278 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 05:17:12,278 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139032278"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:12,297 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 05:17:12,308 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,308 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,308 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,308 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,308 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,309 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 empty. 2023-07-12 05:17:12,309 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 empty. 2023-07-12 05:17:12,310 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 empty. 2023-07-12 05:17:12,310 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,310 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b empty. 2023-07-12 05:17:12,309 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 empty. 
2023-07-12 05:17:12,311 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,311 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,311 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,311 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,311 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 05:17:12,312 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 05:17:12,312 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 05:17:12,314 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:12,315 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 05:17:12,315 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:12,315 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 05:17:12,316 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 05:17:12,316 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 05:17:12,343 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:12,344 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => d92e16b107efecd15bde27e3a574979b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.', STARTKEY => '', ENDKEY => 'aaaaa'}, 
tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:12,344 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2e6aeaefb92314951976fe8c78175315, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:12,347 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 0398ccbccb61d9f0661d3c786e226ab7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:12,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2e6aeaefb92314951976fe8c78175315, disabling compactions & flushes 2023-07-12 05:17:12,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,400 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 
2023-07-12 05:17:12,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing d92e16b107efecd15bde27e3a574979b, disabling compactions & flushes 2023-07-12 05:17:12,400 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:12,400 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. after waiting 0 ms 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. after waiting 0 ms 2023-07-12 05:17:12,401 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2e6aeaefb92314951976fe8c78175315: 2023-07-12 05:17:12,401 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 
2023-07-12 05:17:12,401 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for d92e16b107efecd15bde27e3a574979b: 2023-07-12 05:17:12,401 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 033df2b0d36dbd17939f0e0b26be23e2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:12,402 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => afc04df4397cd6b8c63171edf09ef2b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 0398ccbccb61d9f0661d3c786e226ab7, disabling compactions & flushes 2023-07-12 05:17:12,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. after waiting 0 ms 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 
2023-07-12 05:17:12,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 0398ccbccb61d9f0661d3c786e226ab7: 2023-07-12 05:17:12,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 05:17:12,429 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 033df2b0d36dbd17939f0e0b26be23e2, disabling compactions & flushes 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing afc04df4397cd6b8c63171edf09ef2b4, disabling compactions & flushes 2023-07-12 05:17:12,430 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,430 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. after waiting 0 ms 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. after waiting 0 ms 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 
2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,430 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,430 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 033df2b0d36dbd17939f0e0b26be23e2: 2023-07-12 05:17:12,430 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for afc04df4397cd6b8c63171edf09ef2b4: 2023-07-12 05:17:12,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032434"}]},"ts":"1689139032434"} 2023-07-12 05:17:12,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032434"}]},"ts":"1689139032434"} 2023-07-12 05:17:12,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032434"}]},"ts":"1689139032434"} 2023-07-12 05:17:12,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032434"}]},"ts":"1689139032434"} 2023-07-12 05:17:12,435 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139032434"}]},"ts":"1689139032434"} 2023-07-12 05:17:12,438 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 05:17:12,440 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139032439"}]},"ts":"1689139032439"} 2023-07-12 05:17:12,442 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 05:17:12,445 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:12,446 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:12,446 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:12,446 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:12,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, ASSIGN}] 2023-07-12 05:17:12,451 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, ASSIGN 2023-07-12 05:17:12,451 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, ASSIGN 2023-07-12 05:17:12,451 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, ASSIGN 2023-07-12 05:17:12,451 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, ASSIGN 2023-07-12 05:17:12,451 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, ASSIGN 2023-07-12 05:17:12,452 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:12,452 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:12,452 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:12,452 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:12,453 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:12,603 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 05:17:12,607 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=0398ccbccb61d9f0661d3c786e226ab7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,607 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=afc04df4397cd6b8c63171edf09ef2b4, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,608 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032607"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139032607"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139032607"}]},"ts":"1689139032607"} 2023-07-12 05:17:12,608 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032607"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139032607"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139032607"}]},"ts":"1689139032607"} 2023-07-12 05:17:12,609 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=033df2b0d36dbd17939f0e0b26be23e2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:12,609 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=d92e16b107efecd15bde27e3a574979b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:12,609 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=2e6aeaefb92314951976fe8c78175315, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,609 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032609"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139032609"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139032609"}]},"ts":"1689139032609"} 2023-07-12 05:17:12,609 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032609"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139032609"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139032609"}]},"ts":"1689139032609"} 2023-07-12 05:17:12,609 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032609"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139032609"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139032609"}]},"ts":"1689139032609"} 2023-07-12 05:17:12,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; OpenRegionProcedure 
0398ccbccb61d9f0661d3c786e226ab7, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:12,613 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=57, state=RUNNABLE; OpenRegionProcedure afc04df4397cd6b8c63171edf09ef2b4, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:12,615 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=56, state=RUNNABLE; OpenRegionProcedure 033df2b0d36dbd17939f0e0b26be23e2, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:12,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=54, state=RUNNABLE; OpenRegionProcedure 2e6aeaefb92314951976fe8c78175315, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:12,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=53, state=RUNNABLE; OpenRegionProcedure d92e16b107efecd15bde27e3a574979b, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:12,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 05:17:12,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0398ccbccb61d9f0661d3c786e226ab7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 05:17:12,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,773 INFO [StoreOpener-0398ccbccb61d9f0661d3c786e226ab7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,775 DEBUG [StoreOpener-0398ccbccb61d9f0661d3c786e226ab7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/f 2023-07-12 05:17:12,775 DEBUG [StoreOpener-0398ccbccb61d9f0661d3c786e226ab7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/f 2023-07-12 05:17:12,776 INFO [StoreOpener-0398ccbccb61d9f0661d3c786e226ab7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0398ccbccb61d9f0661d3c786e226ab7 columnFamilyName f 2023-07-12 05:17:12,777 INFO [StoreOpener-0398ccbccb61d9f0661d3c786e226ab7-1] regionserver.HStore(310): Store=0398ccbccb61d9f0661d3c786e226ab7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:12,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 033df2b0d36dbd17939f0e0b26be23e2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 05:17:12,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,779 INFO [StoreOpener-033df2b0d36dbd17939f0e0b26be23e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,781 DEBUG [StoreOpener-033df2b0d36dbd17939f0e0b26be23e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/f 2023-07-12 05:17:12,781 DEBUG [StoreOpener-033df2b0d36dbd17939f0e0b26be23e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/f 2023-07-12 05:17:12,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:12,781 INFO [StoreOpener-033df2b0d36dbd17939f0e0b26be23e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 033df2b0d36dbd17939f0e0b26be23e2 columnFamilyName f 2023-07-12 05:17:12,782 INFO [StoreOpener-033df2b0d36dbd17939f0e0b26be23e2-1] regionserver.HStore(310): Store=033df2b0d36dbd17939f0e0b26be23e2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:12,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:12,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:12,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 
05:17:12,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 033df2b0d36dbd17939f0e0b26be23e2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11939160640, jitterRate=0.11192098259925842}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:12,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0398ccbccb61d9f0661d3c786e226ab7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9941504480, jitterRate=-0.0741252452135086}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:12,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 033df2b0d36dbd17939f0e0b26be23e2: 2023-07-12 05:17:12,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0398ccbccb61d9f0661d3c786e226ab7: 2023-07-12 05:17:12,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2., pid=60, masterSystemTime=1689139032773 2023-07-12 05:17:12,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7., pid=58, masterSystemTime=1689139032764 2023-07-12 05:17:12,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:12,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 
2023-07-12 05:17:12,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => afc04df4397cd6b8c63171edf09ef2b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 05:17:12,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,811 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=0398ccbccb61d9f0661d3c786e226ab7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,811 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032811"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139032811"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139032811"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139032811"}]},"ts":"1689139032811"} 2023-07-12 05:17:12,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:12,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 
2023-07-12 05:17:12,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d92e16b107efecd15bde27e3a574979b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 05:17:12,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,813 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=033df2b0d36dbd17939f0e0b26be23e2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:12,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,813 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032813"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139032813"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139032813"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139032813"}]},"ts":"1689139032813"} 2023-07-12 05:17:12,813 INFO [StoreOpener-afc04df4397cd6b8c63171edf09ef2b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,816 INFO [StoreOpener-d92e16b107efecd15bde27e3a574979b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,816 DEBUG [StoreOpener-afc04df4397cd6b8c63171edf09ef2b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/f 2023-07-12 05:17:12,816 DEBUG [StoreOpener-afc04df4397cd6b8c63171edf09ef2b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/f 2023-07-12 05:17:12,817 INFO [StoreOpener-afc04df4397cd6b8c63171edf09ef2b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region afc04df4397cd6b8c63171edf09ef2b4 columnFamilyName f 2023-07-12 05:17:12,818 INFO [StoreOpener-afc04df4397cd6b8c63171edf09ef2b4-1] regionserver.HStore(310): Store=afc04df4397cd6b8c63171edf09ef2b4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:12,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-12 05:17:12,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; OpenRegionProcedure 0398ccbccb61d9f0661d3c786e226ab7, server=jenkins-hbase20.apache.org,38695,1689139027905 in 203 msec 2023-07-12 05:17:12,819 DEBUG [StoreOpener-d92e16b107efecd15bde27e3a574979b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/f 2023-07-12 05:17:12,819 DEBUG [StoreOpener-d92e16b107efecd15bde27e3a574979b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/f 2023-07-12 05:17:12,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,820 INFO [StoreOpener-d92e16b107efecd15bde27e3a574979b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d92e16b107efecd15bde27e3a574979b columnFamilyName f 2023-07-12 05:17:12,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,821 INFO [StoreOpener-d92e16b107efecd15bde27e3a574979b-1] regionserver.HStore(310): Store=d92e16b107efecd15bde27e3a574979b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-12 05:17:12,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=56 2023-07-12 05:17:12,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=56, state=SUCCESS; OpenRegionProcedure 033df2b0d36dbd17939f0e0b26be23e2, server=jenkins-hbase20.apache.org,35711,1689139024278 in 201 msec 2023-07-12 05:17:12,822 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, ASSIGN in 370 msec 2023-07-12 05:17:12,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,824 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, ASSIGN in 373 msec 2023-07-12 05:17:12,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:12,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:12,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:12,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened afc04df4397cd6b8c63171edf09ef2b4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11555925760, jitterRate=0.07622945308685303}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:12,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for afc04df4397cd6b8c63171edf09ef2b4: 2023-07-12 05:17:12,830 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4., pid=59, masterSystemTime=1689139032764 2023-07-12 05:17:12,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:12,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,834 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d92e16b107efecd15bde27e3a574979b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10422977440, jitterRate=-0.029284581542015076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:12,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d92e16b107efecd15bde27e3a574979b: 2023-07-12 05:17:12,834 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:12,834 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:12,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2e6aeaefb92314951976fe8c78175315, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 05:17:12,834 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=afc04df4397cd6b8c63171edf09ef2b4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,835 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032834"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139032834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139032834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139032834"}]},"ts":"1689139032834"} 2023-07-12 05:17:12,835 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b., pid=62, masterSystemTime=1689139032773 2023-07-12 05:17:12,835 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,835 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:12,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:12,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:12,839 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=d92e16b107efecd15bde27e3a574979b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:12,840 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139032839"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139032839"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139032839"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139032839"}]},"ts":"1689139032839"} 2023-07-12 05:17:12,841 INFO [StoreOpener-2e6aeaefb92314951976fe8c78175315-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,843 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=57 2023-07-12 05:17:12,843 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=57, state=SUCCESS; OpenRegionProcedure afc04df4397cd6b8c63171edf09ef2b4, server=jenkins-hbase20.apache.org,38695,1689139027905 in 226 msec 2023-07-12 05:17:12,844 DEBUG [StoreOpener-2e6aeaefb92314951976fe8c78175315-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/f 2023-07-12 05:17:12,844 DEBUG [StoreOpener-2e6aeaefb92314951976fe8c78175315-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/f 2023-07-12 05:17:12,845 INFO [StoreOpener-2e6aeaefb92314951976fe8c78175315-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2e6aeaefb92314951976fe8c78175315 columnFamilyName f 2023-07-12 05:17:12,845 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=53 2023-07-12 05:17:12,845 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=53, state=SUCCESS; OpenRegionProcedure d92e16b107efecd15bde27e3a574979b, 
server=jenkins-hbase20.apache.org,35711,1689139024278 in 223 msec 2023-07-12 05:17:12,846 INFO [StoreOpener-2e6aeaefb92314951976fe8c78175315-1] regionserver.HStore(310): Store=2e6aeaefb92314951976fe8c78175315/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:12,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, ASSIGN in 396 msec 2023-07-12 05:17:12,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, ASSIGN in 399 msec 2023-07-12 05:17:12,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:12,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:12,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2e6aeaefb92314951976fe8c78175315; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11054402880, jitterRate=0.02952149510383606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2e6aeaefb92314951976fe8c78175315: 2023-07-12 05:17:12,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315., pid=61, masterSystemTime=1689139032764 2023-07-12 05:17:12,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:12,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 
2023-07-12 05:17:12,861 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=2e6aeaefb92314951976fe8c78175315, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:12,861 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139032861"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139032861"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139032861"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139032861"}]},"ts":"1689139032861"} 2023-07-12 05:17:12,866 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=54 2023-07-12 05:17:12,866 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=54, state=SUCCESS; OpenRegionProcedure 2e6aeaefb92314951976fe8c78175315, server=jenkins-hbase20.apache.org,38695,1689139027905 in 247 msec 2023-07-12 05:17:12,868 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=52 2023-07-12 05:17:12,868 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, ASSIGN in 418 msec 2023-07-12 05:17:12,868 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139032868"}]},"ts":"1689139032868"} 2023-07-12 05:17:12,870 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 05:17:12,872 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 05:17:12,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 773 msec 2023-07-12 05:17:13,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 05:17:13,232 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-12 05:17:13,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:13,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:13,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:13,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:13,236 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 05:17:13,243 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139033243"}]},"ts":"1689139033243"} 2023-07-12 05:17:13,245 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 05:17:13,247 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 05:17:13,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, UNASSIGN}] 2023-07-12 05:17:13,252 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, UNASSIGN 2023-07-12 05:17:13,252 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, UNASSIGN 2023-07-12 05:17:13,253 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, UNASSIGN 2023-07-12 05:17:13,253 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=033df2b0d36dbd17939f0e0b26be23e2, UNASSIGN 2023-07-12 05:17:13,253 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, UNASSIGN 2023-07-12 05:17:13,255 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=d92e16b107efecd15bde27e3a574979b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:13,255 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139033255"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139033255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139033255"}]},"ts":"1689139033255"} 2023-07-12 05:17:13,256 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=2e6aeaefb92314951976fe8c78175315, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:13,256 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033256"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139033256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139033256"}]},"ts":"1689139033256"} 2023-07-12 05:17:13,256 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=afc04df4397cd6b8c63171edf09ef2b4, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:13,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139033256"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139033256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139033256"}]},"ts":"1689139033256"} 2023-07-12 05:17:13,256 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=0398ccbccb61d9f0661d3c786e226ab7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:13,257 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=033df2b0d36dbd17939f0e0b26be23e2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:13,258 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033256"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139033256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139033256"}]},"ts":"1689139033256"} 2023-07-12 05:17:13,258 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033256"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139033256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139033256"}]},"ts":"1689139033256"} 2023-07-12 05:17:13,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure d92e16b107efecd15bde27e3a574979b, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:13,280 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure 2e6aeaefb92314951976fe8c78175315, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:13,281 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=68, state=RUNNABLE; CloseRegionProcedure afc04df4397cd6b8c63171edf09ef2b4, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:13,283 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; CloseRegionProcedure 033df2b0d36dbd17939f0e0b26be23e2, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:13,285 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=66, state=RUNNABLE; CloseRegionProcedure 0398ccbccb61d9f0661d3c786e226ab7, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:13,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 05:17:13,430 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:13,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 033df2b0d36dbd17939f0e0b26be23e2, disabling compactions & flushes 2023-07-12 05:17:13,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:13,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:13,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. after waiting 0 ms 2023-07-12 05:17:13,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 
2023-07-12 05:17:13,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:13,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2. 2023-07-12 05:17:13,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 033df2b0d36dbd17939f0e0b26be23e2: 2023-07-12 05:17:13,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:13,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2e6aeaefb92314951976fe8c78175315, disabling compactions & flushes 2023-07-12 05:17:13,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:13,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:13,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. after waiting 0 ms 2023-07-12 05:17:13,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:13,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:13,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d92e16b107efecd15bde27e3a574979b, disabling compactions & flushes 2023-07-12 05:17:13,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. after waiting 0 ms 2023-07-12 05:17:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 
2023-07-12 05:17:13,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=033df2b0d36dbd17939f0e0b26be23e2, regionState=CLOSED 2023-07-12 05:17:13,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139033442"}]},"ts":"1689139033442"} 2023-07-12 05:17:13,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:13,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315. 2023-07-12 05:17:13,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2e6aeaefb92314951976fe8c78175315: 2023-07-12 05:17:13,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-12 05:17:13,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; CloseRegionProcedure 033df2b0d36dbd17939f0e0b26be23e2, server=jenkins-hbase20.apache.org,35711,1689139024278 in 161 msec 2023-07-12 05:17:13,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:13,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:13,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing afc04df4397cd6b8c63171edf09ef2b4, disabling compactions & flushes 2023-07-12 05:17:13,449 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=2e6aeaefb92314951976fe8c78175315, regionState=CLOSED 2023-07-12 05:17:13,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:13,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:13,449 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033448"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139033448"}]},"ts":"1689139033448"} 2023-07-12 05:17:13,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 
after waiting 0 ms 2023-07-12 05:17:13,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:13,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=033df2b0d36dbd17939f0e0b26be23e2, UNASSIGN in 198 msec 2023-07-12 05:17:13,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:13,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-12 05:17:13,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure 2e6aeaefb92314951976fe8c78175315, server=jenkins-hbase20.apache.org,38695,1689139027905 in 172 msec 2023-07-12 05:17:13,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4. 2023-07-12 05:17:13,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for afc04df4397cd6b8c63171edf09ef2b4: 2023-07-12 05:17:13,457 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2e6aeaefb92314951976fe8c78175315, UNASSIGN in 207 msec 2023-07-12 05:17:13,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:13,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:13,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:13,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0398ccbccb61d9f0661d3c786e226ab7, disabling compactions & flushes 2023-07-12 05:17:13,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:13,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:13,460 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=afc04df4397cd6b8c63171edf09ef2b4, regionState=CLOSED 2023-07-12 05:17:13,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 
after waiting 0 ms 2023-07-12 05:17:13,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 2023-07-12 05:17:13,460 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139033460"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139033460"}]},"ts":"1689139033460"} 2023-07-12 05:17:13,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b. 2023-07-12 05:17:13,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d92e16b107efecd15bde27e3a574979b: 2023-07-12 05:17:13,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:13,463 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=d92e16b107efecd15bde27e3a574979b, regionState=CLOSED 2023-07-12 05:17:13,463 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689139033463"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139033463"}]},"ts":"1689139033463"} 2023-07-12 05:17:13,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:13,467 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=68 2023-07-12 05:17:13,467 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=68, state=SUCCESS; CloseRegionProcedure afc04df4397cd6b8c63171edf09ef2b4, server=jenkins-hbase20.apache.org,38695,1689139027905 in 181 msec 2023-07-12 05:17:13,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7. 
2023-07-12 05:17:13,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0398ccbccb61d9f0661d3c786e226ab7: 2023-07-12 05:17:13,470 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-12 05:17:13,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=afc04df4397cd6b8c63171edf09ef2b4, UNASSIGN in 218 msec 2023-07-12 05:17:13,470 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure d92e16b107efecd15bde27e3a574979b, server=jenkins-hbase20.apache.org,35711,1689139024278 in 199 msec 2023-07-12 05:17:13,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:13,471 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=0398ccbccb61d9f0661d3c786e226ab7, regionState=CLOSED 2023-07-12 05:17:13,471 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689139033471"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139033471"}]},"ts":"1689139033471"} 2023-07-12 05:17:13,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d92e16b107efecd15bde27e3a574979b, UNASSIGN in 221 msec 2023-07-12 05:17:13,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=66 2023-07-12 05:17:13,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=66, state=SUCCESS; CloseRegionProcedure 0398ccbccb61d9f0661d3c786e226ab7, server=jenkins-hbase20.apache.org,38695,1689139027905 in 188 msec 2023-07-12 05:17:13,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-12 05:17:13,491 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0398ccbccb61d9f0661d3c786e226ab7, UNASSIGN in 231 msec 2023-07-12 05:17:13,492 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139033492"}]},"ts":"1689139033492"} 2023-07-12 05:17:13,494 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 05:17:13,496 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 05:17:13,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 260 msec 2023-07-12 05:17:13,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 05:17:13,547 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-12 05:17:13,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,564 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_781802648' 2023-07-12 05:17:13,565 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648 2023-07-12 05:17:13,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:13,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:13,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-12 05:17:13,581 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:13,581 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:13,581 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:13,581 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:13,581 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:13,585 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/recovered.edits] 2023-07-12 05:17:13,585 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/recovered.edits] 2023-07-12 05:17:13,589 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/recovered.edits] 2023-07-12 05:17:13,589 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/recovered.edits] 2023-07-12 05:17:13,590 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/recovered.edits] 2023-07-12 05:17:13,609 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7/recovered.edits/4.seqid 2023-07-12 05:17:13,609 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b/recovered.edits/4.seqid 2023-07-12 05:17:13,610 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4/recovered.edits/4.seqid 2023-07-12 05:17:13,610 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0398ccbccb61d9f0661d3c786e226ab7 2023-07-12 05:17:13,610 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315/recovered.edits/4.seqid 2023-07-12 05:17:13,611 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2/recovered.edits/4.seqid 2023-07-12 05:17:13,611 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d92e16b107efecd15bde27e3a574979b 2023-07-12 05:17:13,612 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/afc04df4397cd6b8c63171edf09ef2b4 2023-07-12 05:17:13,612 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2e6aeaefb92314951976fe8c78175315 2023-07-12 05:17:13,612 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/033df2b0d36dbd17939f0e0b26be23e2 2023-07-12 05:17:13,612 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 05:17:13,615 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,622 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 05:17:13,625 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 
2023-07-12 05:17:13,627 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 05:17:13,627 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-12 05:17:13,627 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139033627"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:13,627 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139033627"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:13,627 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139033627"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:13,628 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139033627"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:13,628 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139033627"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:13,630 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 05:17:13,630 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d92e16b107efecd15bde27e3a574979b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689139032199.d92e16b107efecd15bde27e3a574979b.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2e6aeaefb92314951976fe8c78175315, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689139032199.2e6aeaefb92314951976fe8c78175315.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 0398ccbccb61d9f0661d3c786e226ab7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689139032199.0398ccbccb61d9f0661d3c786e226ab7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 033df2b0d36dbd17939f0e0b26be23e2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689139032199.033df2b0d36dbd17939f0e0b26be23e2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => afc04df4397cd6b8c63171edf09ef2b4, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689139032199.afc04df4397cd6b8c63171edf09ef2b4.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 05:17:13,630 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 05:17:13,630 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139033630"}]},"ts":"9223372036854775807"}
2023-07-12 05:17:13,632 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META
2023-07-12 05:17:13,635 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop
2023-07-12 05:17:13,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 81 msec
2023-07-12 05:17:13,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74
2023-07-12 05:17:13,680 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed
2023-07-12 05:17:13,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_781802648
2023-07-12 05:17:13,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 05:17:13,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 05:17:13,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 05:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default
2023-07-12 05:17:13,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-12 05:17:13,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables
2023-07-12 05:17:13,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup default
2023-07-12 05:17:13,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_781802648
2023-07-12 05:17:13,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 05:17:13,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 05:17:13,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-12 05:17:13,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_781802648, current retry=0
2023-07-12 05:17:13,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to Group_testTableMoveTruncateAndDrop_781802648
2023-07-12 05:17:13,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_781802648 => default
2023-07-12 05:17:13,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers
2023-07-12 05:17:13,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testTableMoveTruncateAndDrop_781802648
2023-07-12 05:17:13,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 05:17:13,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 05:17:13,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5
2023-07-12 05:17:13,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-12 05:17:13,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default
2023-07-12 05:17:13,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-12 05:17:13,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables
2023-07-12 05:17:13,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default
2023-07-12 05:17:13,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers
2023-07-12 05:17:13,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master
2023-07-12 05:17:13,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 05:17:13,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-12 05:17:13,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-12 05:17:13,749 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-12 05:17:13,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master
2023-07-12 05:17:13,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 05:17:13,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 05:17:13,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-12 05:17:13,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup
2023-07-12 05:17:13,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 05:17:13,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 05:17:13,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master
2023-07-12 05:17:13,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 05:17:13,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140233761, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
2023-07-12 05:17:13,762 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
2023-07-12 05:17:13,764 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 05:17:13,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 05:17:13,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 05:17:13,766 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 05:17:13,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default
2023-07-12 05:17:13,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 05:17:13,794 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=495 (was 422)
Potentially hanging thread: qtp1066350127-636-acceptor-0@62ecfdd3-ServerConnector@7b5edea0{HTTP/1.1, (http/1.1)}{0.0.0.0:36495} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x57a3c224-shared-pool-8
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-cf431ad-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1066350127-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62508@0x2aff626f-EventThread sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62508@0x2aff626f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1066350127-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62508@0x2aff626f-SendThread(127.0.0.1:62508) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase20:38695-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1242647302_17 at /127.0.0.1:33748 [Receiving block BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:35039 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1066350127-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase20:38695 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_661179543_17 at /127.0.0.1:33276 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1066350127-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1066350127-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e-prefix:jenkins-hbase20.apache.org,38695,1689139027905 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:35039 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1066350127-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38695 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1242647302_17 at /127.0.0.1:56118 [Receiving block BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:38695Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1242647302_17 at /127.0.0.1:33698 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1066350127-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1242647302_17 at /127.0.0.1:33398 [Receiving block BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-406622829-148.251.75.209-1689139018115:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 695) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=624 (was 669), ProcessCount=170 (was 170), AvailableMemoryMB=3366 (was 3472) 2023-07-12 05:17:13,812 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=624, ProcessCount=170, AvailableMemoryMB=3361 2023-07-12 05:17:13,816 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 05:17:13,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:13,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
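[Editor's note] The rsgroup admin entries around this point (list rsgroup, empty moveTables/moveServers to "default", remove/add of the "master" group, and the rejected attempt to move the active master's address into it) are the per-test setup and teardown that TestRSGroupsBase drives through the RSGroupAdmin endpoint. Below is a minimal, illustrative client-side sketch of calls that would produce entries like these, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient API that the later stack traces reference; the configuration bootstrap is a placeholder and this is not the test's actual code.

import java.util.Collections;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);

      // "list rsgroup" entries correspond to listing the known groups.
      for (RSGroupInfo info : admin.listRSGroups()) {
        System.out.println(info.getName() + " servers=" + info.getServers());
      }

      // Empty moves are logged server-side as "passed an empty set. Ignoring."
      admin.moveTables(Collections.<TableName>emptySet(), RSGroupInfo.DEFAULT_GROUP);
      admin.moveServers(Collections.<Address>emptySet(), RSGroupInfo.DEFAULT_GROUP);

      // Adding/removing a group updates the stored group metadata (see the
      // "Updating znode: /hbase/rsgroup/..." entries in the surrounding log).
      admin.addRSGroup("master");

      // Moving an address that is not a live region server (here the master's
      // own RPC port, taken from this run) is rejected with ConstraintException
      // "... is either offline or it does not exist", which the test harness
      // then logs as "Got this on setup, FYI".
      Set<Address> servers =
          Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 41085));
      try {
        admin.moveServers(servers, "master");
      } catch (Exception e) {
        System.out.println("move rejected: " + e.getMessage());
      }

      admin.removeRSGroup("master");
    }
  }
}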
2023-07-12 05:17:13,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:13,824 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:13,824 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:13,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:13,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:13,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:13,834 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:13,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:13,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:13,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:13,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:13,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:13,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:13,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140233853, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:13,854 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:13,856 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:13,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,857 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:13,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:13,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:13,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo* 2023-07-12 05:17:13,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:13,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:54108 deadline: 1689140233859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 05:17:13,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo@ 2023-07-12 05:17:13,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:13,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:54108 deadline: 1689140233861, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 05:17:13,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup - 2023-07-12 05:17:13,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:13,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 148.251.75.209:54108 deadline: 1689140233862, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 05:17:13,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo_123 2023-07-12 05:17:13,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 05:17:13,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:13,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:13,869 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:13,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:13,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
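[Editor's note] The testValidGroupNames entries above show addRSGroup rejecting "foo*", "foo@" and "-" with ConstraintException "RSGroup name should only contain alphanumeric characters" (raised from RSGroupInfoManagerImpl.checkGroupName), while "foo_123" is accepted, so the effective rule evidently permits underscores despite the wording of the message. The sketch below is only an illustration consistent with that observed behavior, not HBase's actual implementation; the class name and the use of IllegalArgumentException are placeholders.

import java.util.regex.Pattern;

public class GroupNameCheckSketch {
  // Consistent with the log above: alphanumerics plus '_' pass, anything else fails.
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // HBase raises ConstraintException here; this sketch uses a plain runtime exception.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String name : new String[] {"foo*", "foo@", "-", "foo_123"}) {
      try {
        checkGroupName(name);
        System.out.println(name + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(name + " -> rejected");
      }
    }
  }
}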
2023-07-12 05:17:13,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:13,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:13,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:13,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup foo_123 2023-07-12 05:17:13,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:13,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:13,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:13,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:13,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 05:17:13,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:13,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:13,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:13,911 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:13,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:13,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:13,927 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:13,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:13,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:13,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:13,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:13,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:13,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:13,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:13,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140233947, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:13,948 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:13,950 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:13,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:13,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:13,952 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:13,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:13,954 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:13,991 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 495) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 784), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=624 (was 624), ProcessCount=170 (was 170), AvailableMemoryMB=3324 (was 3361) 2023-07-12 05:17:14,013 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=624, ProcessCount=170, AvailableMemoryMB=3321 2023-07-12 05:17:14,013 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 05:17:14,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:14,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:14,022 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:14,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
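[Editor's note] The ResourceChecker "before:"/"after:" entries bracket each test method and compare counts of threads, open file descriptors, system load, process count and available memory; when a count grows, the report appends a marker such as "- Thread LEAK? -" and dumps "Potentially hanging thread" stacks like the blocks above. The following is a small illustrative sketch of that before/after idea for the thread count only; it is not HBase's ResourceChecker, and the class and method names are placeholders.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ResourceCheckSketch {
  private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
  private int before;

  void before(String testName) {
    before = threads.getThreadCount();
    System.out.printf("before: %s Thread=%d%n", testName, before);
  }

  void after(String testName) {
    int after = threads.getThreadCount();
    System.out.printf("after: %s Thread=%d (was %d)%s%n",
        testName, after, before, after > before ? " - Thread LEAK? -" : "");
    if (after > before) {
      // Dump stacks of live threads, analogous to the "Potentially hanging thread" sections.
      Thread.getAllStackTraces().forEach((t, stack) -> {
        System.out.println("Potentially hanging thread: " + t.getName());
        for (StackTraceElement frame : stack) {
          System.out.println("    " + frame);
        }
      });
    }
  }

  public static void main(String[] args) {
    ResourceCheckSketch checker = new ResourceCheckSketch();
    checker.before("testValidGroupNames");
    Thread t = new Thread(() -> {
      try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
    });
    t.setDaemon(true);
    t.start();  // deliberately leave a thread running so the report fires
    checker.after("testValidGroupNames");
  }
}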
2023-07-12 05:17:14,023 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:14,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:14,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:14,031 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:14,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:14,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:14,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:14,043 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:14,044 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:14,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:14,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:14,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:14,052 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:14,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:14,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:14,059 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:14,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:14,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140234059, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:14,060 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:14,062 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:14,063 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:14,063 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:14,064 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:14,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:14,065 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:14,067 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:14,067 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:14,068 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//148.251.75.209 add rsgroup bar 2023-07-12 05:17:14,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:14,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:14,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:14,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:14,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:14,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:14,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:14,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:44619] to rsgroup bar 2023-07-12 05:17:14,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:14,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:14,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:14,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:14,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(238): Moving server region 65a59a940eb599446f9a504f8dbf75d7, which do not belong to RSGroup bar 2023-07-12 05:17:14,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE 2023-07-12 05:17:14,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 05:17:14,092 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE 2023-07-12 05:17:14,094 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:14,094 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139034094"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139034094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139034094"}]},"ts":"1689139034094"} 2023-07-12 05:17:14,096 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:14,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 65a59a940eb599446f9a504f8dbf75d7, disabling compactions & flushes 2023-07-12 05:17:14,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. after waiting 0 ms 2023-07-12 05:17:14,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 65a59a940eb599446f9a504f8dbf75d7 1/1 column families, dataSize=5.05 KB heapSize=8.49 KB 2023-07-12 05:17:14,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.05 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,292 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/51fce7f157a34b1d82db4d2fe1bdef4f as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/51fce7f157a34b1d82db4d2fe1bdef4f, 
entries=9, sequenceid=32, filesize=5.5 K 2023-07-12 05:17:14,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.05 KB/5168, heapSize ~8.48 KB/8680, currentSize=0 B/0 for 65a59a940eb599446f9a504f8dbf75d7 in 56ms, sequenceid=32, compaction requested=false 2023-07-12 05:17:14,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-12 05:17:14,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:14,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:14,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 65a59a940eb599446f9a504f8dbf75d7 move to jenkins-hbase20.apache.org,46611,1689139023835 record at close sequenceid=32 2023-07-12 05:17:14,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,333 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=CLOSED 2023-07-12 05:17:14,333 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139034333"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139034333"}]},"ts":"1689139034333"} 2023-07-12 05:17:14,340 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-12 05:17:14,340 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,44619,1689139024083 in 240 msec 2023-07-12 05:17:14,341 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:14,492 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:14,492 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139034491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139034491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139034491"}]},"ts":"1689139034491"} 2023-07-12 05:17:14,497 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:14,656 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,656 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65a59a940eb599446f9a504f8dbf75d7, NAME => 'hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:14,656 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:14,656 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. service=MultiRowMutationService 2023-07-12 05:17:14,657 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 05:17:14,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:14,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,659 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,660 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 2023-07-12 05:17:14,660 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m 2023-07-12 05:17:14,661 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65a59a940eb599446f9a504f8dbf75d7 columnFamilyName m 2023-07-12 05:17:14,683 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,683 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/51fce7f157a34b1d82db4d2fe1bdef4f 2023-07-12 05:17:14,696 DEBUG [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/e2b09ff2dfa141dfa423e74ff8b122c6 2023-07-12 05:17:14,696 INFO [StoreOpener-65a59a940eb599446f9a504f8dbf75d7-1] regionserver.HStore(310): Store=65a59a940eb599446f9a504f8dbf75d7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:14,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,700 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,704 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:14,705 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 65a59a940eb599446f9a504f8dbf75d7; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@11127179, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:14,705 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:14,706 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7., pid=77, masterSystemTime=1689139034651 2023-07-12 05:17:14,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:14,708 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
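[Editor's note] The ConstraintException in the stack traces further up is raised when the teardown asks RSGroupAdminServer.moveServers() to move jenkins-hbase20.apache.org:41085 into the freshly re-added "master" group; that address is the master's RPC endpoint rather than a live region server, so the request is rejected, and the test only logs it as "Got this on setup, FYI". A minimal sketch of that interaction, again assuming the RSGroupAdminClient API seen in the stack traces (hostname and port copied from the log):

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterAddressSketch {
  static void tryMoveMasterAddress(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    Address master = Address.fromParts("jenkins-hbase20.apache.org", 41085); // the master's RPC port in this run
    try {
      // Only addresses of online region servers can be moved between groups.
      admin.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException expected) {
      // "Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist."
      // TestRSGroupsBase treats this as an informational setup-time warning.
    }
  }
}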
2023-07-12 05:17:14,708 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=65a59a940eb599446f9a504f8dbf75d7, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:14,709 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139034708"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139034708"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139034708"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139034708"}]},"ts":"1689139034708"} 2023-07-12 05:17:14,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-12 05:17:14,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 65a59a940eb599446f9a504f8dbf75d7, server=jenkins-hbase20.apache.org,46611,1689139023835 in 217 msec 2023-07-12 05:17:14,715 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=65a59a940eb599446f9a504f8dbf75d7, REOPEN/MOVE in 624 msec 2023-07-12 05:17:15,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-12 05:17:15,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905, jenkins-hbase20.apache.org,44619,1689139024083] are moved back to default 2023-07-12 05:17:15,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 05:17:15,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:15,094 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44619] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 148.251.75.209:52726 deadline: 1689139095093, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=46611 startCode=1689139023835. As of locationSeqNum=32. 
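[Editor's note] The preceding entries show testFailRemoveGroup creating group "bar" and moving three region servers into it. Because the hbase:rsgroup region (65a59a940eb599446f9a504f8dbf75d7) was hosted on one of those servers but its table still belongs to the default group, the master first moves that region off via the REOPEN/MOVE procedure (pid=75) and only then reports "Move servers done: default => bar". A minimal client-side sketch of that call sequence, with the server addresses copied from the log and the RSGroupAdminClient API taken from the stack traces above:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersToBarSketch {
  static void moveServersToBar(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    admin.addRSGroup("bar"); // "add rsgroup bar" in the log
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 38695));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 35711));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 44619));
    // Blocks until the master has moved any regions that do not belong to "bar"
    // (here the hbase:rsgroup region) off the listed servers.
    admin.moveServers(servers, "bar");
  }
}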
2023-07-12 05:17:15,211 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:15,212 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:15,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-12 05:17:15,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:15,217 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:15,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:15,221 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:15,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-12 05:17:15,222 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44619] ipc.CallRunner(144): callId: 188 service: ClientService methodName: ExecService size: 532 connection: 148.251.75.209:52728 deadline: 1689139095222, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=46611 startCode=1689139023835. As of locationSeqNum=32. 
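[Editor's note] The create-table entry above shows the schema the test asks for: table Group_testFailRemoveGroup with REGION_REPLICATION => '1' and a single family 'f' using shell-style defaults. A minimal sketch of the equivalent request through the standard HBase 2.x client API (the helper name createTestTable is illustrative):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestTableSketch {
  static void createTestTable(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // family defaults match the schema printed in the log
        .build();
    // Submits a CreateTableProcedure (pid=78 in this run) and waits for it to finish;
    // the repeated "Checking to see if procedure is done pid=78" entries are the client polling.
    admin.createTable(desc);
  }
}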
2023-07-12 05:17:15,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 05:17:15,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 05:17:15,327 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:15,327 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:15,328 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:15,328 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:15,330 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:15,332 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,333 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 empty. 2023-07-12 05:17:15,333 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,333 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 05:17:15,356 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:15,357 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c9b2f1beae123697be3d6f97dad7b919, NAME => 'Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:15,380 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:15,380 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing c9b2f1beae123697be3d6f97dad7b919, disabling compactions & flushes 2023-07-12 05:17:15,380 INFO 
[RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:15,380 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:15,380 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. after waiting 0 ms 2023-07-12 05:17:15,380 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:15,381 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:15,381 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:15,387 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:15,388 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139035388"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139035388"}]},"ts":"1689139035388"} 2023-07-12 05:17:15,390 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 05:17:15,391 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:15,391 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139035391"}]},"ts":"1689139035391"} 2023-07-12 05:17:15,392 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 05:17:15,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, ASSIGN}] 2023-07-12 05:17:15,397 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, ASSIGN 2023-07-12 05:17:15,398 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:15,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 05:17:15,549 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:15,549 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139035549"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139035549"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139035549"}]},"ts":"1689139035549"} 2023-07-12 05:17:15,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:15,708 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
2023-07-12 05:17:15,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c9b2f1beae123697be3d6f97dad7b919, NAME => 'Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:15,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:15,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,710 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,711 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:15,711 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:15,712 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c9b2f1beae123697be3d6f97dad7b919 columnFamilyName f 2023-07-12 05:17:15,712 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(310): Store=c9b2f1beae123697be3d6f97dad7b919/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:15,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,713 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:15,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:15,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c9b2f1beae123697be3d6f97dad7b919; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10478193120, jitterRate=-0.024142220616340637}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:15,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:15,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919., pid=80, masterSystemTime=1689139035704 2023-07-12 05:17:15,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:15,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
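[Editor's note] At this point the table's single region (c9b2f1beae123697be3d6f97dad7b919) has been opened on jenkins-hbase20.apache.org,46611 and its meta row is about to be updated to OPEN. A minimal sketch of how a client could confirm where a single-region table landed, using the standard RegionLocator API; reload=true bypasses the client's location cache, which matters after moves like the RegionMovedException entries earlier in this log:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  static void printRegionLocation(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      // Look up the region holding the empty start key, forcing a fresh hbase:meta read.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
    }
  }
}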
2023-07-12 05:17:15,722 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:15,722 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139035722"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139035722"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139035722"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139035722"}]},"ts":"1689139035722"} 2023-07-12 05:17:15,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-12 05:17:15,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835 in 172 msec 2023-07-12 05:17:15,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-12 05:17:15,729 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, ASSIGN in 331 msec 2023-07-12 05:17:15,729 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:15,729 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139035729"}]},"ts":"1689139035729"} 2023-07-12 05:17:15,731 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 05:17:15,733 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:15,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 516 msec 2023-07-12 05:17:15,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 05:17:15,827 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-12 05:17:15,827 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 05:17:15,827 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:15,836 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
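[Editor's note] The "Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms" entries come from HBaseTestingUtility.waitUntilAllRegionsAssigned, which the test calls right after the create completes. A minimal sketch of that call, assuming a mini cluster already started by the surrounding test class and passed in as TEST_UTIL (the parameter name is illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  static void waitForTable(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Checks hbase:meta and the assignment manager until every region of the table
    // has an open location, or the 60s timeout elapses.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"), 60000);
  }
}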
2023-07-12 05:17:15,837 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:15,837 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 05:17:15,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 05:17:15,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:15,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:15,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:15,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:15,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 05:17:15,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region c9b2f1beae123697be3d6f97dad7b919 to RSGroup bar 2023-07-12 05:17:15,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:15,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:15,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:15,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:15,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 05:17:15,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:15,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE 2023-07-12 05:17:15,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 05:17:15,853 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE 2023-07-12 05:17:15,854 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:15,855 DEBUG [PEWorker-2] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139035854"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139035854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139035854"}]},"ts":"1689139035854"} 2023-07-12 05:17:15,857 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:16,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c9b2f1beae123697be3d6f97dad7b919, disabling compactions & flushes 2023-07-12 05:17:16,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:16,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:16,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. after waiting 0 ms 2023-07-12 05:17:16,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:16,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:16,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
2023-07-12 05:17:16,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:16,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding c9b2f1beae123697be3d6f97dad7b919 move to jenkins-hbase20.apache.org,44619,1689139024083 record at close sequenceid=2 2023-07-12 05:17:16,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,022 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSED 2023-07-12 05:17:16,023 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139036022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139036022"}]},"ts":"1689139036022"} 2023-07-12 05:17:16,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-12 05:17:16,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835 in 167 msec 2023-07-12 05:17:16,027 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:16,177 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 05:17:16,177 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:16,178 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139036177"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139036177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139036177"}]},"ts":"1689139036177"} 2023-07-12 05:17:16,180 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:16,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
2023-07-12 05:17:16,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c9b2f1beae123697be3d6f97dad7b919, NAME => 'Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:16,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:16,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,337 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,338 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:16,338 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:16,339 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c9b2f1beae123697be3d6f97dad7b919 columnFamilyName f 2023-07-12 05:17:16,340 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(310): Store=c9b2f1beae123697be3d6f97dad7b919/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:16,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,342 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:16,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c9b2f1beae123697be3d6f97dad7b919; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9477549280, jitterRate=-0.11733444035053253}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:16,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:16,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919., pid=83, masterSystemTime=1689139036332 2023-07-12 05:17:16,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:16,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:16,349 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:16,349 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139036349"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139036349"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139036349"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139036349"}]},"ts":"1689139036349"} 2023-07-12 05:17:16,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-12 05:17:16,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,44619,1689139024083 in 171 msec 2023-07-12 05:17:16,354 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE in 503 msec 2023-07-12 05:17:16,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-12 05:17:16,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
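
The MoveTables request above ("move tables [Group_testFailRemoveGroup] to rsgroup bar") is the kind of call sketched below. RSGroupAdminClient is the client wrapper referenced in this test's stack traces; its constructor and exact signatures are assumed from the branch-2.4 hbase-rsgroup module and may differ in other releases:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveTableToGroup {
  // Pin a table to an existing RSGroup ("bar" was created and given servers
  // earlier in the test). Constructor and signatures assumed from branch-2.4.
  static void moveTable(Connection conn, String table, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf(table)), group);
    // The master turns this into the REOPEN/MOVE TransitRegionStateProcedure
    // (pid=81 above) and only returns once the region is open on a "bar" server.
    System.out.println(rsGroupAdmin.getRSGroupInfo(group).getTables());
  }
}
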
2023-07-12 05:17:16,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:16,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:16,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:16,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-12 05:17:16,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:16,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 05:17:16,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:16,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:54108 deadline: 1689140236866, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 05:17:16,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:44619] to rsgroup default 2023-07-12 05:17:16,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:16,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 191 connection: 148.251.75.209:54108 deadline: 1689140236867, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 05:17:16,869 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 05:17:16,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:16,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:16,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:16,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:16,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 05:17:16,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region c9b2f1beae123697be3d6f97dad7b919 to RSGroup default 2023-07-12 05:17:16,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE 2023-07-12 05:17:16,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 05:17:16,877 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE 2023-07-12 05:17:16,878 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:16,878 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139036878"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139036878"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139036878"}]},"ts":"1689139036878"} 2023-07-12 05:17:16,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:17,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c9b2f1beae123697be3d6f97dad7b919, disabling compactions & flushes 2023-07-12 05:17:17,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:17,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:17,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. after waiting 0 ms 2023-07-12 05:17:17,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:17,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
2023-07-12 05:17:17,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:17,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding c9b2f1beae123697be3d6f97dad7b919 move to jenkins-hbase20.apache.org,46611,1689139023835 record at close sequenceid=5 2023-07-12 05:17:17,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,046 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSED 2023-07-12 05:17:17,047 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139037046"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139037046"}]},"ts":"1689139037046"} 2023-07-12 05:17:17,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 05:17:17,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,44619,1689139024083 in 169 msec 2023-07-12 05:17:17,051 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:17,201 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:17,201 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139037201"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139037201"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139037201"}]},"ts":"1689139037201"} 2023-07-12 05:17:17,204 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:17,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 
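
While the region bounces between servers, the PEWorker threads keep rewriting its hbase:meta row (sn/state columns, then regionState=OPEN with a new regionLocation), so a client's cached location goes stale. A minimal sketch of picking up the new hosting server for a single-region table such as this one, assuming only the standard client API:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class LocateRegion {
  // Return the current hosting server of a single-region table, bypassing the
  // client-side location cache so the answer reflects the latest hbase:meta row.
  static HRegionLocation locate(Connection conn, String table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
      // reload=true forces a re-read of hbase:meta, which the PEWorker threads
      // above have just rewritten.
      return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
    }
  }
}

HRegionLocation#getServerName() should then match the regionLocation value in the most recent RegionStateStore update.
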
2023-07-12 05:17:17,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c9b2f1beae123697be3d6f97dad7b919, NAME => 'Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:17,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:17,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,363 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,364 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:17,364 DEBUG [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f 2023-07-12 05:17:17,365 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c9b2f1beae123697be3d6f97dad7b919 columnFamilyName f 2023-07-12 05:17:17,365 INFO [StoreOpener-c9b2f1beae123697be3d6f97dad7b919-1] regionserver.HStore(310): Store=c9b2f1beae123697be3d6f97dad7b919/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:17,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,368 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:17,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c9b2f1beae123697be3d6f97dad7b919; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11805232640, jitterRate=0.09944796562194824}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:17,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:17,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919., pid=86, masterSystemTime=1689139037356 2023-07-12 05:17:17,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:17,375 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:17,375 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:17,375 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139037375"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139037375"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139037375"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139037375"}]},"ts":"1689139037375"} 2023-07-12 05:17:17,378 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-12 05:17:17,378 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835 in 173 msec 2023-07-12 05:17:17,380 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, REOPEN/MOVE in 503 msec 2023-07-12 05:17:17,477 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 05:17:17,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-12 05:17:17,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group 
default. 2023-07-12 05:17:17,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:17,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:17,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:17,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 05:17:17,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:17,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:54108 deadline: 1689140237886, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
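
The two ConstraintExceptions above spell out the required order: a group's tables must leave before its servers can, and both must be gone before the group itself can be removed. A minimal sketch of that order, again assuming the branch-2.4 RSGroupAdminClient wrapper:

import java.util.TreeSet;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RemoveGroupCleanly {
  // Empty a group of tables, then of servers, then remove it; this mirrors the
  // order the ConstraintException checks above require.
  static void removeGroup(Connection conn, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (!info.getTables().isEmpty()) {
      // "RSGroup ... has N tables" guard: move the tables back to default first.
      rsGroupAdmin.moveTables(new TreeSet<>(info.getTables()), RSGroupInfo.DEFAULT_GROUP);
    }
    if (!info.getServers().isEmpty()) {
      // "RSGroup ... has N servers" guard: then move its servers back to default.
      rsGroupAdmin.moveServers(new TreeSet<>(info.getServers()), RSGroupInfo.DEFAULT_GROUP);
    }
    rsGroupAdmin.removeRSGroup(group);  // now passes both checks
  }
}

The records that follow take the same path: MoveServers back to default succeeds, and only then does RemoveRSGroup for "bar" go through.
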
2023-07-12 05:17:17,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:44619] to rsgroup default 2023-07-12 05:17:17,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 05:17:17,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:17,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:17,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 05:17:17,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905, jenkins-hbase20.apache.org,44619,1689139024083] are moved back to bar 2023-07-12 05:17:17,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 05:17:17,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:17,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:17,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:17,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 05:17:17,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:17,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:17,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:17,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:17,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:17,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:17,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:17,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:17,916 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 05:17:17,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testFailRemoveGroup 2023-07-12 05:17:17,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:17,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 05:17:17,921 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139037921"}]},"ts":"1689139037921"} 2023-07-12 05:17:17,922 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 05:17:17,923 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 05:17:17,924 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, UNASSIGN}] 2023-07-12 05:17:17,928 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, UNASSIGN 2023-07-12 05:17:17,928 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:17,929 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139037928"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139037928"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139037928"}]},"ts":"1689139037928"} 2023-07-12 05:17:17,932 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:18,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 05:17:18,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close c9b2f1beae123697be3d6f97dad7b919 2023-07-12 
05:17:18,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c9b2f1beae123697be3d6f97dad7b919, disabling compactions & flushes 2023-07-12 05:17:18,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:18,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:18,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. after waiting 0 ms 2023-07-12 05:17:18,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 05:17:18,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919. 2023-07-12 05:17:18,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c9b2f1beae123697be3d6f97dad7b919: 2023-07-12 05:17:18,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:18,102 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=c9b2f1beae123697be3d6f97dad7b919, regionState=CLOSED 2023-07-12 05:17:18,102 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689139038102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139038102"}]},"ts":"1689139038102"} 2023-07-12 05:17:18,106 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-12 05:17:18,106 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure c9b2f1beae123697be3d6f97dad7b919, server=jenkins-hbase20.apache.org,46611,1689139023835 in 174 msec 2023-07-12 05:17:18,108 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 05:17:18,108 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=c9b2f1beae123697be3d6f97dad7b919, UNASSIGN in 182 msec 2023-07-12 05:17:18,109 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139038109"}]},"ts":"1689139038109"} 2023-07-12 05:17:18,110 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 
2023-07-12 05:17:18,112 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-12 05:17:18,114 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 195 msec 2023-07-12 05:17:18,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 05:17:18,224 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-12 05:17:18,224 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testFailRemoveGroup 2023-07-12 05:17:18,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,228 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 05:17:18,229 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,234 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:18,236 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits] 2023-07-12 05:17:18,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:18,243 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/10.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919/recovered.edits/10.seqid 2023-07-12 05:17:18,243 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testFailRemoveGroup/c9b2f1beae123697be3d6f97dad7b919 2023-07-12 05:17:18,244 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 05:17:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 05:17:18,249 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,252 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 05:17:18,259 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 05:17:18,262 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,262 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-12 05:17:18,262 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139038262"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:18,264 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 05:17:18,265 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c9b2f1beae123697be3d6f97dad7b919, NAME => 'Group_testFailRemoveGroup,,1689139035217.c9b2f1beae123697be3d6f97dad7b919.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 05:17:18,265 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-12 05:17:18,265 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139038265"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:18,277 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 05:17:18,279 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 05:17:18,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 54 msec 2023-07-12 05:17:18,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 05:17:18,346 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-12 05:17:18,350 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,351 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,352 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:18,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
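
The DISABLE (procId 87) and DELETE (procId 90) operations above boil down to two Admin calls. A minimal sketch with an assumed helper name:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

final class DropTable {
  // A table must be disabled (regions unassigned) before it can be deleted.
  static void drop(Connection conn, String table) throws Exception {
    TableName tn = TableName.valueOf(table);
    try (Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);  // DisableTableProcedure: UNASSIGN regions, state=DISABLED
      }
      admin.deleteTable(tn);     // DeleteTableProcedure: archive region dirs, clean hbase:meta
    }
  }
}
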
2023-07-12 05:17:18,352 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:18,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:18,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:18,354 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:18,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:18,360 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:18,364 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:18,365 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:18,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:18,378 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:18,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:18,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:18,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140238396, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:18,397 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:18,399 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:18,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,401 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:18,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:18,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:18,424 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=503 (was 498) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_468688637_17 at /127.0.0.1:48792 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_468688637_17 at /127.0.0.1:42520 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1536315787_17 at /127.0.0.1:33276 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x61c28258-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_468688637_17 at /127.0.0.1:48806 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-26352938_17 at /127.0.0.1:42536 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 784) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=606 (was 624), ProcessCount=170 (was 170), AvailableMemoryMB=3096 (was 3321) 2023-07-12 05:17:18,425 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 05:17:18,444 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=503, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=606, ProcessCount=170, AvailableMemoryMB=3093 2023-07-12 05:17:18,444 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 05:17:18,444 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 05:17:18,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:18,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 05:17:18,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:18,451 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:18,451 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:18,452 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:18,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:18,457 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:18,461 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:18,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:18,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:18,483 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:18,487 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,488 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:18,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:18,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140238490, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:18,491 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:18,496 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:18,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,498 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:18,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:18,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:18,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:18,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:18,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-12 05:17:18,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:18,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:18,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35711] to rsgroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:18,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:18,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278] are moved back to default 2023-07-12 05:17:18,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:18,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:18,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:18,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:18,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:18,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:18,530 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:18,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-12 05:17:18,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 05:17:18,532 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:18,533 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:18,533 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:18,534 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:18,536 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:18,538 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,538 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b empty. 
2023-07-12 05:17:18,539 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,539 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 05:17:18,560 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:18,561 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5d43f7ab14b181844b3bf8cd2694fc1b, NAME => 'GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:18,579 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:18,579 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 5d43f7ab14b181844b3bf8cd2694fc1b, disabling compactions & flushes 2023-07-12 05:17:18,579 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:18,579 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:18,579 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. after waiting 0 ms 2023-07-12 05:17:18,579 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:18,580 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 
2023-07-12 05:17:18,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 5d43f7ab14b181844b3bf8cd2694fc1b: 2023-07-12 05:17:18,582 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:18,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139038582"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139038582"}]},"ts":"1689139038582"} 2023-07-12 05:17:18,584 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:18,585 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:18,585 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139038585"}]},"ts":"1689139038585"} 2023-07-12 05:17:18,587 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 05:17:18,591 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:18,592 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:18,592 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:18,592 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:18,592 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:18,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, ASSIGN}] 2023-07-12 05:17:18,595 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, ASSIGN 2023-07-12 05:17:18,596 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:18,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 05:17:18,746 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:18,747 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:18,748 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139038747"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139038747"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139038747"}]},"ts":"1689139038747"} 2023-07-12 05:17:18,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:18,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 05:17:18,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:18,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d43f7ab14b181844b3bf8cd2694fc1b, NAME => 'GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:18,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:18,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,908 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,910 DEBUG [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/f 2023-07-12 05:17:18,910 DEBUG [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/f 2023-07-12 05:17:18,911 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d43f7ab14b181844b3bf8cd2694fc1b columnFamilyName f 2023-07-12 05:17:18,911 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] regionserver.HStore(310): Store=5d43f7ab14b181844b3bf8cd2694fc1b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:18,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:18,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:18,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5d43f7ab14b181844b3bf8cd2694fc1b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10216897280, jitterRate=-0.04847729206085205}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:18,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5d43f7ab14b181844b3bf8cd2694fc1b: 2023-07-12 05:17:18,919 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b., pid=93, masterSystemTime=1689139038902 2023-07-12 05:17:18,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:18,921 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 
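[Editor's note] At this point region 5d43f7ab14b181844b3bf8cd2694fc1b is open on jenkins-hbase20.apache.org,46611 and the region server has reported back to the master. A client can observe the resulting assignment through the region locator; the sketch below is illustrative (the helper name is made up, the calls are standard HBase 2.x client API):

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class LocateRegionsSketch {
      // Print encoded region name -> hosting server, i.e. the same information the
      // "updating hbase:meta row=..., regionLocation=..." entries record in meta.
      static void printLocations(Connection conn, String table) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }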
2023-07-12 05:17:18,922 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:18,922 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139038922"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139038922"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139038922"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139038922"}]},"ts":"1689139038922"} 2023-07-12 05:17:18,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-12 05:17:18,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,46611,1689139023835 in 173 msec 2023-07-12 05:17:18,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-12 05:17:18,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, ASSIGN in 333 msec 2023-07-12 05:17:18,927 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:18,927 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139038927"}]},"ts":"1689139038927"} 2023-07-12 05:17:18,928 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 05:17:18,930 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:18,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 403 msec 2023-07-12 05:17:19,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 05:17:19,136 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-12 05:17:19,136 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 05:17:19,136 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:19,141 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
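[Editor's note] The "Waiting until all regions of table ... get assigned" and "Checking AM states" lines come from the test utility itself. A hedged sketch of that wait as a test would typically issue it (testUtil stands in for whatever HBaseTestingUtility instance the test holds):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class WaitForAssignmentSketch {
      // Blocks until hbase:meta and the AssignmentManager agree that every region of
      // the table is assigned, or the 60s timeout logged above expires.
      static void waitAssigned(HBaseTestingUtility testUtil, String table) throws Exception {
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf(table), 60_000);
      }
    }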
2023-07-12 05:17:19,141 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:19,141 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 05:17:19,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:19,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:19,147 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:19,148 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-12 05:17:19,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 05:17:19,151 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,152 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:19,152 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:19,153 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:19,155 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:19,156 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,157 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 empty. 
2023-07-12 05:17:19,157 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,157 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 05:17:19,175 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:19,176 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 360a64ea2c50d474721169a0a13fb282, NAME => 'GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 360a64ea2c50d474721169a0a13fb282, disabling compactions & flushes 2023-07-12 05:17:19,199 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. after waiting 0 ms 2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,199 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 
2023-07-12 05:17:19,199 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 360a64ea2c50d474721169a0a13fb282: 2023-07-12 05:17:19,202 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:19,203 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039203"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139039203"}]},"ts":"1689139039203"} 2023-07-12 05:17:19,205 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:19,206 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:19,206 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139039206"}]},"ts":"1689139039206"} 2023-07-12 05:17:19,207 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 05:17:19,210 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:19,211 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:19,211 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:19,211 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:19,211 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:19,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, ASSIGN}] 2023-07-12 05:17:19,214 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, ASSIGN 2023-07-12 05:17:19,214 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:19,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 05:17:19,365 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:19,366 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:19,367 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039366"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139039366"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139039366"}]},"ts":"1689139039366"} 2023-07-12 05:17:19,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:19,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 05:17:19,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 360a64ea2c50d474721169a0a13fb282, NAME => 'GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:19,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:19,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,528 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,530 DEBUG [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/f 2023-07-12 05:17:19,530 DEBUG [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/f 2023-07-12 05:17:19,530 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 360a64ea2c50d474721169a0a13fb282 columnFamilyName f 2023-07-12 05:17:19,531 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] regionserver.HStore(310): Store=360a64ea2c50d474721169a0a13fb282/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:19,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:19,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 360a64ea2c50d474721169a0a13fb282; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11275372000, jitterRate=0.05010084807872772}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:19,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 360a64ea2c50d474721169a0a13fb282: 2023-07-12 05:17:19,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282., pid=96, masterSystemTime=1689139039520 2023-07-12 05:17:19,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 
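[Editor's note] The CompactionConfiguration line printed when each store opens simply echoes the store's effective compaction settings; every field maps to an hbase-site.xml key, and the values shown in this run are the defaults. A hedged sketch of setting those knobs programmatically (the values below are only the defaults the log reports, not tuning advice):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionConfigSketch {
      static Configuration defaultsAsLogged() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);        // minCompactSize:128 MB
        conf.setLong("hbase.hstore.compaction.max.size", Long.MAX_VALUE);            // maxCompactSize:8.00 EB
        conf.setInt("hbase.hstore.compaction.min", 3);                               // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                              // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                        // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);                // off-peak ratio
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);  // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);                   // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);                 // major jitter
        return conf;
      }
    }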
2023-07-12 05:17:19,541 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:19,541 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039541"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139039541"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139039541"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139039541"}]},"ts":"1689139039541"} 2023-07-12 05:17:19,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-12 05:17:19,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,44619,1689139024083 in 175 msec 2023-07-12 05:17:19,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 05:17:19,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, ASSIGN in 334 msec 2023-07-12 05:17:19,547 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:19,547 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139039547"}]},"ts":"1689139039547"} 2023-07-12 05:17:19,549 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 05:17:19,550 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:19,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 407 msec 2023-07-12 05:17:19,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 05:17:19,753 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-12 05:17:19,754 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 05:17:19,754 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:19,758 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-12 05:17:19,758 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:19,758 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 05:17:19,759 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:19,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 05:17:19,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:19,781 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 05:17:19,781 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:19,781 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:19,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:19,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:19,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 360a64ea2c50d474721169a0a13fb282 to RSGroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, REOPEN/MOVE 2023-07-12 05:17:19,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup 
Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 5d43f7ab14b181844b3bf8cd2694fc1b to RSGroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:19,797 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, REOPEN/MOVE 2023-07-12 05:17:19,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, REOPEN/MOVE 2023-07-12 05:17:19,799 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:19,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_2083649399, current retry=0 2023-07-12 05:17:19,800 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, REOPEN/MOVE 2023-07-12 05:17:19,800 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039799"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139039799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139039799"}]},"ts":"1689139039799"} 2023-07-12 05:17:19,802 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:19,802 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039802"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139039802"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139039802"}]},"ts":"1689139039802"} 2023-07-12 05:17:19,802 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:19,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:19,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 360a64ea2c50d474721169a0a13fb282, disabling compactions & flushes 2023-07-12 05:17:19,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing 
region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. after waiting 0 ms 2023-07-12 05:17:19,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:19,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5d43f7ab14b181844b3bf8cd2694fc1b, disabling compactions & flushes 2023-07-12 05:17:19,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:19,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:19,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. after waiting 0 ms 2023-07-12 05:17:19,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:19,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:19,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:19,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 360a64ea2c50d474721169a0a13fb282: 2023-07-12 05:17:19,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 360a64ea2c50d474721169a0a13fb282 move to jenkins-hbase20.apache.org,35711,1689139024278 record at close sequenceid=2 2023-07-12 05:17:19,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:19,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 
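[Editor's note] The MoveTables request logged above ("move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_2083649399") is what triggers the REOPEN/MOVE TransitRegionStateProcedures now closing both regions. One way a client issues it with the 2.4 hbase-rsgroup module is through the RSGroupAdminClient wrapper; the sketch below is illustrative and not lifted from the test source.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveTablesSketch {
      // Ask the master to move both test tables into the target rsgroup; the master
      // then closes each region and reopens it on a server belonging to that group.
      static void moveTables(Connection conn, String targetGroup) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Set<TableName> tables = new HashSet<>();
        tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
        tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
        rsGroupAdmin.moveTables(tables, targetGroup);
      }
    }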
2023-07-12 05:17:19,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5d43f7ab14b181844b3bf8cd2694fc1b: 2023-07-12 05:17:19,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 5d43f7ab14b181844b3bf8cd2694fc1b move to jenkins-hbase20.apache.org,35711,1689139024278 record at close sequenceid=2 2023-07-12 05:17:19,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:19,970 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=CLOSED 2023-07-12 05:17:19,971 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039970"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139039970"}]},"ts":"1689139039970"} 2023-07-12 05:17:19,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:19,974 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=CLOSED 2023-07-12 05:17:19,974 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139039974"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139039974"}]},"ts":"1689139039974"} 2023-07-12 05:17:19,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-12 05:17:19,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,44619,1689139024083 in 173 msec 2023-07-12 05:17:19,977 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:19,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-12 05:17:19,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,46611,1689139023835 in 170 msec 2023-07-12 05:17:19,978 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,35711,1689139024278; forceNewPlan=false, retain=false 2023-07-12 05:17:20,127 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 
05:17:20,127 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:20,128 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139040127"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139040127"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139040127"}]},"ts":"1689139040127"} 2023-07-12 05:17:20,128 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139040127"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139040127"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139040127"}]},"ts":"1689139040127"} 2023-07-12 05:17:20,130 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=98, state=RUNNABLE; OpenRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:20,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=97, state=RUNNABLE; OpenRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:20,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:20,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d43f7ab14b181844b3bf8cd2694fc1b, NAME => 'GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:20,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:20,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,289 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,290 DEBUG [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/f 2023-07-12 05:17:20,290 DEBUG [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/f 2023-07-12 05:17:20,290 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d43f7ab14b181844b3bf8cd2694fc1b columnFamilyName f 2023-07-12 05:17:20,291 INFO [StoreOpener-5d43f7ab14b181844b3bf8cd2694fc1b-1] regionserver.HStore(310): Store=5d43f7ab14b181844b3bf8cd2694fc1b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:20,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5d43f7ab14b181844b3bf8cd2694fc1b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9587830240, jitterRate=-0.10706372559070587}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:20,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5d43f7ab14b181844b3bf8cd2694fc1b: 2023-07-12 05:17:20,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b., pid=101, masterSystemTime=1689139040283 2023-07-12 05:17:20,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 
2023-07-12 05:17:20,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:20,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:20,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 360a64ea2c50d474721169a0a13fb282, NAME => 'GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:20,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:20,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,304 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:20,304 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139040304"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139040304"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139040304"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139040304"}]},"ts":"1689139040304"} 2023-07-12 05:17:20,305 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,309 DEBUG [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/f 2023-07-12 05:17:20,309 DEBUG [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/f 2023-07-12 05:17:20,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=98 2023-07-12 05:17:20,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=98, state=SUCCESS; OpenRegionProcedure 
5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,35711,1689139024278 in 176 msec 2023-07-12 05:17:20,309 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 360a64ea2c50d474721169a0a13fb282 columnFamilyName f 2023-07-12 05:17:20,310 INFO [StoreOpener-360a64ea2c50d474721169a0a13fb282-1] regionserver.HStore(310): Store=360a64ea2c50d474721169a0a13fb282/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:20,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, REOPEN/MOVE in 512 msec 2023-07-12 05:17:20,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:20,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 360a64ea2c50d474721169a0a13fb282; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9764406080, jitterRate=-0.09061881899833679}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:20,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 360a64ea2c50d474721169a0a13fb282: 2023-07-12 05:17:20,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282., pid=102, masterSystemTime=1689139040283 2023-07-12 05:17:20,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:20,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 
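[Editor's note] With both REOPEN/MOVE procedures finishing on jenkins-hbase20.apache.org,35711, the remaining entries show the test verifying the move (ListRSGroupInfos / GetRSGroupInfo / GetRSGroupInfoOfTable) and then starting to disable GrouptestMultiTableMoveA for cleanup. A hedged sketch of that verification, again assuming the RSGroupAdminClient wrapper plus the plain Admin API:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class VerifyMoveSketch {
      static void verifyAndCleanup(RSGroupAdminClient rsGroupAdmin, Admin admin,
          String targetGroup) throws Exception {
        // Both tables should now be listed under the target group.
        RSGroupInfo group = rsGroupAdmin.getRSGroupInfo(targetGroup);
        boolean moved = group.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveA"))
            && group.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveB"));
        System.out.println("tables moved to " + targetGroup + ": " + moved);
        // Cleanup mirrors the DisableTableProcedure that follows in the log.
        admin.disableTable(TableName.valueOf("GrouptestMultiTableMoveA"));
      }
    }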
2023-07-12 05:17:20,320 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:20,320 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139040320"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139040320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139040320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139040320"}]},"ts":"1689139040320"} 2023-07-12 05:17:20,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=97 2023-07-12 05:17:20,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=97, state=SUCCESS; OpenRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,35711,1689139024278 in 190 msec 2023-07-12 05:17:20,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, REOPEN/MOVE in 528 msec 2023-07-12 05:17:20,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-12 05:17:20,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_2083649399. 2023-07-12 05:17:20,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:20,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:20,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:20,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 05:17:20,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:20,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 05:17:20,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:20,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:20,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:20,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_2083649399 2023-07-12 05:17:20,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:20,819 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 05:17:20,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveA 2023-07-12 05:17:20,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:20,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 05:17:20,823 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139040822"}]},"ts":"1689139040822"} 2023-07-12 05:17:20,824 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 05:17:20,825 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 05:17:20,826 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, UNASSIGN}] 2023-07-12 05:17:20,828 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, UNASSIGN 2023-07-12 05:17:20,829 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:20,829 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139040829"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139040829"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139040829"}]},"ts":"1689139040829"} 2023-07-12 05:17:20,831 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, 
server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:20,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 05:17:20,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:20,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5d43f7ab14b181844b3bf8cd2694fc1b, disabling compactions & flushes 2023-07-12 05:17:20,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:20,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:20,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. after waiting 0 ms 2023-07-12 05:17:20,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 2023-07-12 05:17:20,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:20,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b. 
2023-07-12 05:17:20,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5d43f7ab14b181844b3bf8cd2694fc1b: 2023-07-12 05:17:21,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:21,001 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5d43f7ab14b181844b3bf8cd2694fc1b, regionState=CLOSED 2023-07-12 05:17:21,001 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139041001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139041001"}]},"ts":"1689139041001"} 2023-07-12 05:17:21,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-12 05:17:21,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 5d43f7ab14b181844b3bf8cd2694fc1b, server=jenkins-hbase20.apache.org,35711,1689139024278 in 171 msec 2023-07-12 05:17:21,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-12 05:17:21,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=5d43f7ab14b181844b3bf8cd2694fc1b, UNASSIGN in 178 msec 2023-07-12 05:17:21,007 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139041007"}]},"ts":"1689139041007"} 2023-07-12 05:17:21,011 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 05:17:21,012 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 05:17:21,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 195 msec 2023-07-12 05:17:21,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 05:17:21,126 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-12 05:17:21,127 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveA 2023-07-12 05:17:21,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,129 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_2083649399' 2023-07-12 05:17:21,130 DEBUG 
[PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:21,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:21,134 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:21,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 05:17:21,136 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits] 2023-07-12 05:17:21,141 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b/recovered.edits/7.seqid 2023-07-12 05:17:21,142 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveA/5d43f7ab14b181844b3bf8cd2694fc1b 2023-07-12 05:17:21,142 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 05:17:21,145 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,147 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 05:17:21,148 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 05:17:21,150 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,150 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
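
The DisableTableProcedure (pid=103) and DeleteTableProcedure (pid=106) above are what the test drives through the ordinary Admin API while cleaning up GrouptestMultiTableMoveA. A short sketch of that sequence; the Admin calls are the standard HBase client API, while the connection setup and class name are illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        // "Started disable of GrouptestMultiTableMoveA" / DisableTableProcedure in the log.
        admin.disableTable(table);
      }
      // DeleteTableProcedure: archives the region dirs, then removes the rows from hbase:meta.
      admin.deleteTable(table);
    }
  }
}
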
2023-07-12 05:17:21,150 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139041150"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:21,152 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 05:17:21,152 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5d43f7ab14b181844b3bf8cd2694fc1b, NAME => 'GrouptestMultiTableMoveA,,1689139038527.5d43f7ab14b181844b3bf8cd2694fc1b.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 05:17:21,152 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 05:17:21,152 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139041152"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:21,154 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 05:17:21,155 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 05:17:21,156 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 28 msec 2023-07-12 05:17:21,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 05:17:21,237 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-12 05:17:21,238 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 05:17:21,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveB 2023-07-12 05:17:21,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 05:17:21,245 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139041245"}]},"ts":"1689139041245"} 2023-07-12 05:17:21,247 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 05:17:21,248 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 05:17:21,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, UNASSIGN}] 2023-07-12 05:17:21,253 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, UNASSIGN 2023-07-12 05:17:21,254 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:21,254 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139041254"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139041254"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139041254"}]},"ts":"1689139041254"} 2023-07-12 05:17:21,256 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,35711,1689139024278}] 2023-07-12 05:17:21,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 05:17:21,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:21,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 360a64ea2c50d474721169a0a13fb282, disabling compactions & flushes 2023-07-12 05:17:21,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:21,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:21,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. after waiting 0 ms 2023-07-12 05:17:21,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 2023-07-12 05:17:21,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:21,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282. 
2023-07-12 05:17:21,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 360a64ea2c50d474721169a0a13fb282: 2023-07-12 05:17:21,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:21,417 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=360a64ea2c50d474721169a0a13fb282, regionState=CLOSED 2023-07-12 05:17:21,417 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689139041417"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139041417"}]},"ts":"1689139041417"} 2023-07-12 05:17:21,420 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-12 05:17:21,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 360a64ea2c50d474721169a0a13fb282, server=jenkins-hbase20.apache.org,35711,1689139024278 in 163 msec 2023-07-12 05:17:21,422 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-12 05:17:21,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=360a64ea2c50d474721169a0a13fb282, UNASSIGN in 171 msec 2023-07-12 05:17:21,423 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139041423"}]},"ts":"1689139041423"} 2023-07-12 05:17:21,425 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 05:17:21,426 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 05:17:21,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 188 msec 2023-07-12 05:17:21,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 05:17:21,548 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-12 05:17:21,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveB 2023-07-12 05:17:21,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,554 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_2083649399' 2023-07-12 05:17:21,555 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:21,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:21,560 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:21,563 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits] 2023-07-12 05:17:21,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 05:17:21,570 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits/7.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits/7.seqid 2023-07-12 05:17:21,571 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282 2023-07-12 05:17:21,571 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 05:17:21,573 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,575 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 05:17:21,577 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-12 05:17:21,579 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,579 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
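
HFileArchiver moves the region's recovered.edits/7.seqid under the cluster archive directory before the temporary table directory is deleted, as logged above for GrouptestMultiTableMoveB. A quick way to confirm the archival against the mini-DFS of this run, sketched with the plain Hadoop FileSystem API (the NameNode URI and archive path are the ones printed in the log; in a test they would normally be derived from the testing utility rather than hard-coded):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckArchiveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path archivedEdits = new Path(
        "/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/"
        + "GrouptestMultiTableMoveB/360a64ea2c50d474721169a0a13fb282/recovered.edits");
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost.localdomain:35039"), conf)) {
      // Expect the 7.seqid marker written by WALSplitUtil and moved here by HFileArchiver.
      for (FileStatus status : fs.listStatus(archivedEdits)) {
        System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
      }
    }
  }
}
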
2023-07-12 05:17:21,579 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139041579"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:21,581 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 05:17:21,581 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 360a64ea2c50d474721169a0a13fb282, NAME => 'GrouptestMultiTableMoveB,,1689139039143.360a64ea2c50d474721169a0a13fb282.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 05:17:21,581 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 05:17:21,581 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139041581"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:21,583 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 05:17:21,584 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 05:17:21,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 34 msec 2023-07-12 05:17:21,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 05:17:21,666 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-12 05:17:21,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,670 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:21,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 05:17:21,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:21,672 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35711] to rsgroup default 2023-07-12 05:17:21,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_2083649399 2023-07-12 05:17:21,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:21,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_2083649399, current retry=0 2023-07-12 05:17:21,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278] are moved back to Group_testMultiTableMove_2083649399 2023-07-12 05:17:21,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_2083649399 => default 2023-07-12 05:17:21,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:21,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testMultiTableMove_2083649399 2023-07-12 05:17:21,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:21,703 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:21,704 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:21,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
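
The MoveServers and RemoveRSGroup requests above are the per-test cleanup in TestRSGroupsBase: the group's only region server is handed back to the default group and the now-empty group is dropped. A sketch of the same two steps through the rsgroup client; RSGroupAdminClient.moveServers is the call visible at RSGroupAdminClient.java:108 in the stack traces below, while the removeRSGroup signature, the Address helper, and the connection setup are assumed from the branch-2.4 client:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // "Move servers done: Group_testMultiTableMove_2083649399 => default" in the log;
      // with no regions left for the group, the master moves 0 regions and returns.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 35711)),
          "default");
      // The group is empty afterwards, so it can be removed ("remove rsgroup ..." above).
      rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_2083649399");
    }
  }
}
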
2023-07-12 05:17:21,704 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:21,705 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:21,705 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:21,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:21,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:21,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:21,714 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:21,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:21,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:21,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:21,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:21,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 05:17:21,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 507 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140241726, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
2023-07-12 05:17:21,727 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-12 05:17:21,728 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 05:17:21,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 05:17:21,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 05:17:21,729 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 05:17:21,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default
2023-07-12 05:17:21,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 05:17:21,745 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=502 (was 503), OpenFileDescriptor=776 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=573 (was 606), ProcessCount=170 (was 170), AvailableMemoryMB=3076 (was 3093)
2023-07-12 05:17:21,745 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=502 is superior to 500
2023-07-12 05:17:21,761 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=502, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=573, ProcessCount=170, AvailableMemoryMB=3075
2023-07-12 05:17:21,761 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=502 is superior to 500
2023-07-12 05:17:21,761 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints
2023-07-12 05:17:21,766 INFO
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:21,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 05:17:21,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:21,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:21,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:21,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:21,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:21,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:21,778 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:21,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:21,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:21,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:21,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,800 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 05:17:21,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master
2023-07-12 05:17:21,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 05:17:21,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 535 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140241803, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
2023-07-12 05:17:21,804 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:21,806 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:21,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,807 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:21,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldGroup 2023-07-12 05:17:21,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:21,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:21,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:21,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup oldGroup 2023-07-12 05:17:21,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:21,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to default 2023-07-12 05:17:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 05:17:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:21,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 05:17:21,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 05:17:21,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:21,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,958 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup anotherRSGroup 2023-07-12 05:17:21,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 05:17:21,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:21,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:21,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:21,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,983 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44619] to rsgroup anotherRSGroup 2023-07-12 05:17:21,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:21,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 05:17:21,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:21,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:21,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:21,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:21,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44619,1689139024083] are moved back to default 2023-07-12 05:17:21,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 05:17:21,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 
05:17:21,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:21,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:21,994 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 05:17:21,994 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:21,995 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 05:17:21,995 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:22,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 05:17:22,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 113 connection: 148.251.75.209:54108 deadline: 1689140242000, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 05:17:22,003 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 05:17:22,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 106 connection: 148.251.75.209:54108 deadline: 1689140242003, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 05:17:22,004 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from default to newRSGroup2 2023-07-12 05:17:22,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 102 connection: 148.251.75.209:54108 deadline: 1689140242004, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 05:17:22,005 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to default 2023-07-12 05:17:22,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 99 connection: 148.251.75.209:54108 deadline: 1689140242005, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 05:17:22,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:22,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
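The four rename attempts above (nonExistingRSGroup to newRSGroup1, oldGroup to anotherRSGroup, default to newRSGroup2, oldGroup to default) each fail on a different server-side check inside RSGroupInfoManagerImpl.renameRSGroup. The Java sketch below reconstructs the apparent order of those checks purely from the exception messages and the line numbers in the stack traces (403, 407, 410); it is an illustrative sketch, not the actual HBase source, and it uses plain IOException in place of ConstraintException.

    import java.io.IOException;
    import java.util.Set;

    final class RenameRSGroupChecksSketch {
      // groupNames stands in for the set of rsgroups the manager currently knows about.
      static void validateRename(Set<String> groupNames, String oldName, String newName)
          throws IOException {
        if ("default".equals(oldName)) {
          // matches "Can't rename default rsgroup" (RSGroupInfoManagerImpl.java:403)
          throw new IOException("Can't rename default rsgroup");
        }
        if (!groupNames.contains(oldName)) {
          // matches "RSGroup nonExistingRSGroup does not exist" (RSGroupInfoManagerImpl.java:407)
          throw new IOException("RSGroup " + oldName + " does not exist");
        }
        if (groupNames.contains(newName)) {
          // matches "Group already exists: anotherRSGroup" and "Group already exists: default"
          // (RSGroupInfoManagerImpl.java:410)
          throw new IOException("Group already exists: " + newName);
        }
        // Only after all three checks pass would the rename itself be applied.
      }
    }

Note that renaming a group to the name "default" is rejected by the same already-exists check rather than by a dedicated rule, which is why the last call in the log reports "Group already exists: default".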
2023-07-12 05:17:22,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:22,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44619] to rsgroup default 2023-07-12 05:17:22,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 05:17:22,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:22,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:22,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 05:17:22,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44619,1689139024083] are moved back to anotherRSGroup 2023-07-12 05:17:22,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 05:17:22,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:22,015 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup anotherRSGroup 2023-07-12 05:17:22,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:22,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 05:17:22,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:22,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:22,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): 
moveTables() passed an empty set. Ignoring. 2023-07-12 05:17:22,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:22,027 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup default 2023-07-12 05:17:22,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 05:17:22,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:22,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 05:17:22,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to oldGroup 2023-07-12 05:17:22,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 05:17:22,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:22,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup oldGroup 2023-07-12 05:17:22,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:22,045 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:22,046 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:22,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
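Between tests, the endpoint walks every temporary group back to a clean state: the empty moveTables call is ignored, the servers are moved back to the default group, and the now-empty test groups are removed one by one, with the ZK GroupInfo count shrinking at each step. The sketch below mirrors that cleanup order; the RSGroupCleanup interface is a hypothetical stand-in for the RSGroupAdminClient calls named in the stack traces, not the real client API.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;
    import java.util.Set;

    final class RSGroupTearDownSketch {
      // Hypothetical stand-in for the client calls seen in the stack traces
      // (RSGroupAdminClient.moveTables / moveServers / removeRSGroup).
      interface RSGroupCleanup {
        void moveTables(Set<String> tables, String targetGroup) throws IOException;
        void moveServers(Set<String> servers, String targetGroup) throws IOException;
        void removeRSGroup(String name) throws IOException;
      }

      static void restoreDefaults(RSGroupCleanup admin, Set<String> serversToRestore,
          List<String> groupsToDrop) throws IOException {
        // The server logs "moveTables() passed an empty set. Ignoring." for this call.
        admin.moveTables(Collections.emptySet(), "default");
        // Put the servers that were carved out for the test back into the default group.
        admin.moveServers(serversToRestore, "default");
        // Drop each now-empty test group; the znode count drops with every removal.
        for (String group : groupsToDrop) {
          admin.removeRSGroup(group);
        }
      }
    }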
2023-07-12 05:17:22,046 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:22,046 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:22,046 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:22,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:22,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:22,052 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:22,056 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:22,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:22,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:22,078 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:22,082 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,082 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,085 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:22,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 611 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140242084, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:22,085 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:22,087 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:22,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,088 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:22,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:22,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:22,103 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=505 (was 502) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=573 (was 573), ProcessCount=170 (was 170), AvailableMemoryMB=3003 (was 3075) 2023-07-12 05:17:22,103 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 05:17:22,118 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=505, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=573, ProcessCount=170, AvailableMemoryMB=3002 2023-07-12 05:17:22,118 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 05:17:22,118 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 05:17:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,124 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:22,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
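Each test in this class repeats the same setup sequence, and the recurring WARN "Got this on setup, FYI" entries are tolerated failures: TestRSGroupsBase tries to move the active master's host:port (jenkins-hbase20.apache.org:41085) into a freshly created "master" group, that address is apparently not among the region servers registered in any group, so RSGroupAdminServer rejects the move with a ConstraintException and the test merely logs it. The sketch below illustrates that pattern; it is inferred from the stack traces rather than copied from the actual test code, and the ServerMover interface is a hypothetical stand-in for the moveServers client call.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;

    final class MasterGroupSetupSketch {
      // Hypothetical stand-in for RSGroupAdminClient.moveServers as seen in the stack traces.
      interface ServerMover {
        void moveServers(Set<String> servers, String targetGroup) throws IOException;
      }

      static void tryCarveOutMasterGroup(ServerMover admin, String masterHostPort) {
        try {
          // Attempt to isolate the master's address in its own "master" rsgroup.
          admin.moveServers(Collections.singleton(masterHostPort), "master");
        } catch (IOException e) {
          // The server answers "Server <host:port> is either offline or it does not exist."
          // because that address is not an online region server; the test treats this as
          // informational, matching the WARN lines in this log.
          System.out.println("Got this on setup, FYI " + e);
        }
      }
    }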
2023-07-12 05:17:22,124 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:22,125 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:22,125 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:22,126 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:22,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:22,131 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:22,134 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:22,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:22,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:22,146 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:22,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:22,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:22,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 639 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140242151, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:22,152 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:22,154 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:22,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,155 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:22,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:22,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:22,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:22,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:22,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldgroup 2023-07-12 05:17:22,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:22,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,161 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:22,163 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:22,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,169 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup oldgroup 2023-07-12 05:17:22,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:22,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:22,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:22,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to default 2023-07-12 05:17:22,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 05:17:22,179 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:22,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:22,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:22,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 05:17:22,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:22,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:22,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 05:17:22,190 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:22,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-12 05:17:22,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 05:17:22,192 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:22,192 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,193 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,193 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:22,200 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:22,202 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,202 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/testRename/797b32715a69cd102e216d93a59580cb empty. 
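[Not part of the captured log: a minimal sketch of creating a table equivalent to the 'testRename' descriptor HMaster prints just above (one column family 'tr', a single version, REGION_REPLICATION => 1), using the standard 2.x Admin / TableDescriptorBuilder API; the connection setup is the only assumption.]

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestRenameTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Everything not set here is left at the defaults shown in the descriptor above.
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                  .setMaxVersions(1).build())
              .build());
        }
      }
    }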
2023-07-12 05:17:22,203 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,203 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 05:17:22,221 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:22,222 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 797b32715a69cd102e216d93a59580cb, NAME => 'testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:22,233 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:22,234 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 797b32715a69cd102e216d93a59580cb, disabling compactions & flushes 2023-07-12 05:17:22,234 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,234 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,234 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. after waiting 0 ms 2023-07-12 05:17:22,234 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,234 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,234 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:22,236 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:22,237 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139042237"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139042237"}]},"ts":"1689139042237"} 2023-07-12 05:17:22,238 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 05:17:22,239 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:22,239 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139042239"}]},"ts":"1689139042239"} 2023-07-12 05:17:22,240 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 05:17:22,243 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:22,243 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:22,243 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:22,243 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:22,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, ASSIGN}] 2023-07-12 05:17:22,245 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, ASSIGN 2023-07-12 05:17:22,246 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:22,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 05:17:22,396 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:22,398 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:22,398 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139042398"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139042398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139042398"}]},"ts":"1689139042398"} 2023-07-12 05:17:22,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:22,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 05:17:22,547 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 05:17:22,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 797b32715a69cd102e216d93a59580cb, NAME => 'testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:22,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:22,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,561 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,563 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:22,563 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:22,563 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 797b32715a69cd102e216d93a59580cb columnFamilyName tr 2023-07-12 05:17:22,564 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(310): Store=797b32715a69cd102e216d93a59580cb/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:22,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:22,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 797b32715a69cd102e216d93a59580cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10979425440, jitterRate=0.022538676857948303}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:22,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:22,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb., pid=113, masterSystemTime=1689139042552 2023-07-12 05:17:22,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
2023-07-12 05:17:22,576 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:22,576 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139042576"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139042576"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139042576"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139042576"}]},"ts":"1689139042576"} 2023-07-12 05:17:22,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-12 05:17:22,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083 in 177 msec 2023-07-12 05:17:22,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-12 05:17:22,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, ASSIGN in 336 msec 2023-07-12 05:17:22,582 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:22,583 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139042583"}]},"ts":"1689139042583"} 2023-07-12 05:17:22,584 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 05:17:22,587 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:22,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 401 msec 2023-07-12 05:17:22,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 05:17:22,796 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-12 05:17:22,797 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 05:17:22,797 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:22,804 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
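[Not part of the captured log: the 60-second assignment wait logged by HBaseTestingUtility(3430) above amounts to a single utility call; 'util' stands for the test's shared HBaseTestingUtility instance and is an assumed handle.]

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class AssignmentWaitSketch {
      static void waitForTestRenameAssigned(HBaseTestingUtility util) throws Exception {
        // Blocks until hbase:meta and the AssignmentManager both report every region
        // of 'testRename' open, within the 60,000 ms timeout seen in the log.
        util.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60000);
      }
    }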
2023-07-12 05:17:22,804 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:22,804 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-12 05:17:22,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup oldgroup 2023-07-12 05:17:22,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:22,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:22,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:22,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:22,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 05:17:22,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 797b32715a69cd102e216d93a59580cb to RSGroup oldgroup 2023-07-12 05:17:22,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:22,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:22,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:22,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:22,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:22,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE 2023-07-12 05:17:22,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 05:17:22,813 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE 2023-07-12 05:17:22,814 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:22,814 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139042814"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139042814"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139042814"}]},"ts":"1689139042814"} 2023-07-12 05:17:22,815 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:22,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 797b32715a69cd102e216d93a59580cb, disabling compactions & flushes 2023-07-12 05:17:22,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. after waiting 0 ms 2023-07-12 05:17:22,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:22,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:22,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
2023-07-12 05:17:22,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:22,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 797b32715a69cd102e216d93a59580cb move to jenkins-hbase20.apache.org,38695,1689139027905 record at close sequenceid=2 2023-07-12 05:17:22,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:22,982 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=CLOSED 2023-07-12 05:17:22,982 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139042982"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139042982"}]},"ts":"1689139042982"} 2023-07-12 05:17:22,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 05:17:22,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083 in 168 msec 2023-07-12 05:17:22,985 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38695,1689139027905; forceNewPlan=false, retain=false 2023-07-12 05:17:23,135 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 05:17:23,136 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:23,136 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139043136"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139043136"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139043136"}]},"ts":"1689139043136"} 2023-07-12 05:17:23,138 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:23,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
2023-07-12 05:17:23,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 797b32715a69cd102e216d93a59580cb, NAME => 'testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:23,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:23,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,298 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,299 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:23,299 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:23,299 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 797b32715a69cd102e216d93a59580cb columnFamilyName tr 2023-07-12 05:17:23,300 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(310): Store=797b32715a69cd102e216d93a59580cb/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:23,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:23,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 797b32715a69cd102e216d93a59580cb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10280525120, jitterRate=-0.04255148768424988}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:23,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:23,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb., pid=116, masterSystemTime=1689139043290 2023-07-12 05:17:23,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:23,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:23,307 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:23,307 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139043307"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139043307"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139043307"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139043307"}]},"ts":"1689139043307"} 2023-07-12 05:17:23,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-12 05:17:23,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,38695,1689139027905 in 170 msec 2023-07-12 05:17:23,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE in 498 msec 2023-07-12 05:17:23,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-12 05:17:23,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
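[Not part of the captured log: roughly the client-side call behind the MoveTables request above. The RSGroupAdminClient method names mirror the RPC names in the log (MoveTables, GetRSGroupInfoOfTable); the exact signatures and the open Connection 'conn' are assumptions.]

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class MoveTableSketch {
      static void moveTestRenameToOldGroup(Connection conn) throws Exception {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        TableName table = TableName.valueOf("testRename");
        // The master reopens each region on a server of the target group before
        // answering; that is the REOPEN/MOVE procedure chain (pid=114..116) above.
        admin.moveTables(Collections.singleton(table), "oldgroup");
        RSGroupInfo group = admin.getRSGroupInfoOfTable(table);
        assert "oldgroup".equals(group.getName());
      }
    }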
2023-07-12 05:17:23,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:23,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:23,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:23,820 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:23,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 05:17:23,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:23,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 05:17:23,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:23,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 05:17:23,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:23,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:23,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:23,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup normal 2023-07-12 05:17:23,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:23,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:23,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:23,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 05:17:23,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:23,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:23,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:23,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:23,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44619] to rsgroup normal 2023-07-12 05:17:23,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:23,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:23,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:23,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:23,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:23,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:23,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44619,1689139024083] are moved back to default 2023-07-12 05:17:23,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 05:17:23,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:23,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:23,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:23,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-12 05:17:23,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:23,856 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:23,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 05:17:23,859 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:23,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-12 05:17:23,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 05:17:23,862 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:23,862 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:23,863 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:23,863 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:23,863 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:23,865 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:23,867 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:23,868 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 empty. 
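
The handler entries above (AddRSGroup for "normal", MoveServers of jenkins-hbase20.apache.org:44619, GetRSGroupInfo against port 41085) are the server side of the group setup the test drives before it creates unmovedTable. A minimal client-side sketch of that sequence, assuming the branch-2 RSGroupAdminClient API; the helper class name and the already-open Connection "conn" are illustrative assumptions, not part of the test source:

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class RsGroupSetupSketch {
      static void setUpNormalGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Matches the AddRSGroup request logged above.
        rsGroupAdmin.addRSGroup("normal");
        // Matches the MoveServers request; host:port taken from the log lines above.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 44619)),
            "normal");
        // Matches the GetRSGroupInfo request for group=normal.
        RSGroupInfo normal = rsGroupAdmin.getRSGroupInfo("normal");
        assert normal.getServers().contains(
            Address.fromParts("jenkins-hbase20.apache.org", 44619));
      }
    }
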
2023-07-12 05:17:23,868 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:23,868 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 05:17:23,884 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:23,886 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => e3499c438782e9645a7a2e6435450c64, NAME => 'unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:23,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:23,916 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing e3499c438782e9645a7a2e6435450c64, disabling compactions & flushes 2023-07-12 05:17:23,916 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:23,916 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:23,916 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. after waiting 0 ms 2023-07-12 05:17:23,916 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:23,916 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 
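
The CreateTableProcedure entries above dump the descriptor the test submitted: table unmovedTable with a single column family 'ut' and every attribute at its 2.x default (VERSIONS => '1', BLOCKSIZE => '65536', and so on). Client side this corresponds to a plain Admin.createTable call; a sketch under the 2.x builder API, with "conn" again an assumed open Connection:

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class CreateUnmovedTableSketch {
      static void createUnmovedTable(Connection conn) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("unmovedTable"))
            // Family 'ut' with defaults, matching the descriptor dumped in the log.
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
            .build();
        try (Admin admin = conn.getAdmin()) {
          // Produces a CreateTableProcedure on the master (pid=117 above).
          admin.createTable(desc);
        }
      }
    }
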
2023-07-12 05:17:23,916 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:23,918 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:23,919 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139043919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139043919"}]},"ts":"1689139043919"} 2023-07-12 05:17:23,921 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:23,922 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:23,922 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139043922"}]},"ts":"1689139043922"} 2023-07-12 05:17:23,923 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 05:17:23,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, ASSIGN}] 2023-07-12 05:17:23,927 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, ASSIGN 2023-07-12 05:17:23,928 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:23,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 05:17:24,079 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:24,079 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044079"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139044079"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139044079"}]},"ts":"1689139044079"} 2023-07-12 05:17:24,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:24,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=117 2023-07-12 05:17:24,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3499c438782e9645a7a2e6435450c64, NAME => 'unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:24,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:24,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,238 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,240 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:24,240 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:24,240 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3499c438782e9645a7a2e6435450c64 columnFamilyName ut 2023-07-12 05:17:24,241 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(310): Store=e3499c438782e9645a7a2e6435450c64/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:24,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:24,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e3499c438782e9645a7a2e6435450c64; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11252671040, jitterRate=0.04798665642738342}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:24,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:24,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64., pid=119, masterSystemTime=1689139044232 2023-07-12 05:17:24,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 
2023-07-12 05:17:24,250 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:24,250 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044250"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139044250"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139044250"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139044250"}]},"ts":"1689139044250"} 2023-07-12 05:17:24,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 05:17:24,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835 in 170 msec 2023-07-12 05:17:24,255 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 05:17:24,255 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, ASSIGN in 328 msec 2023-07-12 05:17:24,255 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:24,256 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139044255"}]},"ts":"1689139044255"} 2023-07-12 05:17:24,257 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 05:17:24,259 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:24,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 403 msec 2023-07-12 05:17:24,314 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-12 05:17:24,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 05:17:24,466 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-12 05:17:24,467 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 05:17:24,468 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:24,473 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
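
Once the create finishes (pid=117, 403 msec above), the listener thread waits until every region of unmovedTable is assigned before continuing. In test code that wait is typically one utility call; a sketch assuming the test's shared HBaseTestingUtility instance (the method and class names here are illustrative):

    import java.io.IOException;

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      static void waitForUnmovedTable(HBaseTestingUtility testUtil) throws IOException {
        // Blocks until hbase:meta and the AssignmentManager agree the table's
        // regions are open, which is the 60,000 ms wait reported in the log above.
        testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("unmovedTable"));
      }
    }
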
2023-07-12 05:17:24,473 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:24,473 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-12 05:17:24,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup normal 2023-07-12 05:17:24,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 05:17:24,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:24,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:24,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:24,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:24,482 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 05:17:24,482 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region e3499c438782e9645a7a2e6435450c64 to RSGroup normal 2023-07-12 05:17:24,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE 2023-07-12 05:17:24,483 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 05:17:24,483 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE 2023-07-12 05:17:24,484 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:24,484 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044484"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139044484"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139044484"}]},"ts":"1689139044484"} 2023-07-12 05:17:24,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:24,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1604): Closing e3499c438782e9645a7a2e6435450c64, disabling compactions & flushes 2023-07-12 05:17:24,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. after waiting 0 ms 2023-07-12 05:17:24,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:24,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:24,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding e3499c438782e9645a7a2e6435450c64 move to jenkins-hbase20.apache.org,44619,1689139024083 record at close sequenceid=2 2023-07-12 05:17:24,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,646 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=CLOSED 2023-07-12 05:17:24,646 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044646"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139044646"}]},"ts":"1689139044646"} 2023-07-12 05:17:24,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 05:17:24,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835 in 162 msec 2023-07-12 05:17:24,651 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:24,802 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:24,802 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044802"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139044802"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139044802"}]},"ts":"1689139044802"} 2023-07-12 05:17:24,805 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:24,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3499c438782e9645a7a2e6435450c64, NAME => 'unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,964 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,965 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:24,965 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:24,965 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3499c438782e9645a7a2e6435450c64 columnFamilyName ut 2023-07-12 05:17:24,966 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(310): Store=e3499c438782e9645a7a2e6435450c64/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:24,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:24,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e3499c438782e9645a7a2e6435450c64; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11257199840, jitterRate=0.04840843379497528}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:24,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:24,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64., pid=122, masterSystemTime=1689139044957 2023-07-12 05:17:24,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:24,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 
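
The close/reopen pair above (pid=120 with CloseRegionProcedure 121 and OpenRegionProcedure 122) is how a group move shows up on the region servers: the region is closed on jenkins-hbase20.apache.org,46611 and reopened on 44619, the server that now belongs to the target group. From the client it is a single call; a sketch, again assuming the branch-2 RSGroupAdminClient and a hypothetical "conn":

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveTableToNormalSketch {
      static void moveUnmovedTable(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Issues the MoveTables RPC seen above; the master then reopens every
        // region of the table on servers of the target group (REOPEN/MOVE).
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
      }
    }

The call only returns once the master's ProcedureSyncWait on the move procedure completes, which is why the MoveTables request above is logged after the region has already reopened.
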
2023-07-12 05:17:24,974 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:24,974 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139044974"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139044974"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139044974"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139044974"}]},"ts":"1689139044974"} 2023-07-12 05:17:24,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 05:17:24,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,44619,1689139024083 in 170 msec 2023-07-12 05:17:24,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE in 494 msec 2023-07-12 05:17:25,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 05:17:25,483 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-12 05:17:25,484 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:25,488 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:25,488 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:25,490 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:25,491 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 05:17:25,491 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:25,492 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-12 05:17:25,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:25,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 05:17:25,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:25,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldgroup to newgroup 2023-07-12 05:17:25,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:25,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:25,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:25,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:25,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 05:17:25,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 05:17:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:25,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=newgroup 2023-07-12 05:17:25,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:25,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 05:17:25,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:25,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 05:17:25,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:25,513 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:25,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:25,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup default 2023-07-12 05:17:25,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:25,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:25,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:25,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:25,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:25,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 05:17:25,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region e3499c438782e9645a7a2e6435450c64 to RSGroup default 2023-07-12 05:17:25,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE 2023-07-12 05:17:25,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 05:17:25,523 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE 2023-07-12 05:17:25,524 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:25,525 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139045524"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139045524"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139045524"}]},"ts":"1689139045524"} 2023-07-12 05:17:25,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:25,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 
e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:25,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e3499c438782e9645a7a2e6435450c64, disabling compactions & flushes 2023-07-12 05:17:25,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:25,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:25,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. after waiting 0 ms 2023-07-12 05:17:25,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:25,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:25,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:25,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:25,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding e3499c438782e9645a7a2e6435450c64 move to jenkins-hbase20.apache.org,46611,1689139023835 record at close sequenceid=5 2023-07-12 05:17:25,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:25,693 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=CLOSED 2023-07-12 05:17:25,693 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139045693"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139045693"}]},"ts":"1689139045693"} 2023-07-12 05:17:25,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 05:17:25,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,44619,1689139024083 in 164 msec 2023-07-12 05:17:25,697 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:25,847 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPENING, 
regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:25,848 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139045847"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139045847"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139045847"}]},"ts":"1689139045847"} 2023-07-12 05:17:25,849 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:26,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:26,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3499c438782e9645a7a2e6435450c64, NAME => 'unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:26,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:26,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,006 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,007 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:26,007 DEBUG [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/ut 2023-07-12 05:17:26,008 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3499c438782e9645a7a2e6435450c64 columnFamilyName ut 2023-07-12 05:17:26,008 INFO [StoreOpener-e3499c438782e9645a7a2e6435450c64-1] regionserver.HStore(310): Store=e3499c438782e9645a7a2e6435450c64/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:26,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:26,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e3499c438782e9645a7a2e6435450c64; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10980482880, jitterRate=0.022637158632278442}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:26,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:26,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64., pid=125, masterSystemTime=1689139046001 2023-07-12 05:17:26,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:26,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 
2023-07-12 05:17:26,016 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e3499c438782e9645a7a2e6435450c64, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:26,016 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689139046016"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139046016"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139046016"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139046016"}]},"ts":"1689139046016"} 2023-07-12 05:17:26,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-12 05:17:26,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure e3499c438782e9645a7a2e6435450c64, server=jenkins-hbase20.apache.org,46611,1689139023835 in 168 msec 2023-07-12 05:17:26,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e3499c438782e9645a7a2e6435450c64, REOPEN/MOVE in 498 msec 2023-07-12 05:17:26,189 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-12 05:17:26,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-12 05:17:26,524 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
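
The rename at 05:17:25,495 (oldgroup to newgroup) and the subsequent move of unmovedTable back to default are, client side, two short calls. A sketch, assuming a renameRSGroup method backing the RenameRSGroup RPC is available on RSGroupAdminClient in this branch (that availability is an assumption; only the RPC itself is confirmed by the log):

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RenameAndMoveBackSketch {
      static void renameAndMoveBack(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Matches "rename rsgroup from oldgroup to newgroup" above.
        rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
        // Matches "move tables [unmovedTable] to rsgroup default", which drives
        // the REOPEN/MOVE (pid=123) seen above.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("unmovedTable")), "default");
      }
    }
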
2023-07-12 05:17:26,524 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:26,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44619] to rsgroup default 2023-07-12 05:17:26,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 05:17:26,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:26,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:26,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:26,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:26,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 05:17:26,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44619,1689139024083] are moved back to normal 2023-07-12 05:17:26,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 05:17:26,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:26,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup normal 2023-07-12 05:17:26,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:26,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:26,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:26,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 05:17:26,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:26,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:26,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
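
The entries above are the per-test cleanup: the server is moved back from normal to default, the now-empty normal group is removed, and an empty moveTables call is ignored by the server ("moveTables() passed an empty set. Ignoring."). A sketch of that teardown with the same assumed client and connection:

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class GroupTeardownSketch {
      static void tearDownNormalGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Move the region server back ("Move servers done: normal => default" above).
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 44619)),
            "default");
        // Remove the now-empty group ("remove rsgroup normal" above).
        rsGroupAdmin.removeRSGroup("normal");
      }
    }
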
2023-07-12 05:17:26,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:26,552 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:26,552 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:26,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:26,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:26,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:26,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:26,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:26,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup default 2023-07-12 05:17:26,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:26,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:26,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:26,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 05:17:26,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(345): Moving region 797b32715a69cd102e216d93a59580cb to RSGroup default 2023-07-12 05:17:26,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE 2023-07-12 05:17:26,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 05:17:26,566 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE 2023-07-12 05:17:26,567 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, 
regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:26,567 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139046567"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139046567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139046567"}]},"ts":"1689139046567"} 2023-07-12 05:17:26,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,38695,1689139027905}] 2023-07-12 05:17:26,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:26,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 797b32715a69cd102e216d93a59580cb, disabling compactions & flushes 2023-07-12 05:17:26,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:26,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:26,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. after waiting 0 ms 2023-07-12 05:17:26,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:26,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 05:17:26,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
2023-07-12 05:17:26,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:26,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 797b32715a69cd102e216d93a59580cb move to jenkins-hbase20.apache.org,44619,1689139024083 record at close sequenceid=5 2023-07-12 05:17:26,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:26,730 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=CLOSED 2023-07-12 05:17:26,730 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139046729"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139046729"}]},"ts":"1689139046729"} 2023-07-12 05:17:26,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 05:17:26,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,38695,1689139027905 in 163 msec 2023-07-12 05:17:26,733 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:26,883 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 05:17:26,883 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:26,883 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139046883"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139046883"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139046883"}]},"ts":"1689139046883"} 2023-07-12 05:17:26,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:27,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
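The REOPEN/MOVE procedure chain above (pid=126 with Close/Open sub-procedures 127 and 128) is the same region-move machinery a plain Admin#move request drives. A hedged sketch of such a client-level move, assuming an open Connection `conn`; the encoded region name and target server string are copied from the log for illustration, and this is not the test's code:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionExample {
      static void moveRegion(Connection conn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // Ask the master to move the encoded region to the server the balancer
          // picked in the log above; the master runs a TransitRegionStateProcedure.
          admin.move(Bytes.toBytes("797b32715a69cd102e216d93a59580cb"),
              ServerName.valueOf("jenkins-hbase20.apache.org,44619,1689139024083"));
        }
      }
    }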
2023-07-12 05:17:27,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 797b32715a69cd102e216d93a59580cb, NAME => 'testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:27,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:27,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,043 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,044 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:27,044 DEBUG [StoreOpener-797b32715a69cd102e216d93a59580cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/tr 2023-07-12 05:17:27,045 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 797b32715a69cd102e216d93a59580cb columnFamilyName tr 2023-07-12 05:17:27,046 INFO [StoreOpener-797b32715a69cd102e216d93a59580cb-1] regionserver.HStore(310): Store=797b32715a69cd102e216d93a59580cb/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:27,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:27,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 797b32715a69cd102e216d93a59580cb; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10600473760, jitterRate=-0.01275394856929779}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:27,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:27,052 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb., pid=128, masterSystemTime=1689139047037 2023-07-12 05:17:27,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:27,054 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:27,055 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=797b32715a69cd102e216d93a59580cb, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:27,055 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689139042186.797b32715a69cd102e216d93a59580cb.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689139047054"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139047054"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139047054"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139047054"}]},"ts":"1689139047054"} 2023-07-12 05:17:27,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 05:17:27,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 797b32715a69cd102e216d93a59580cb, server=jenkins-hbase20.apache.org,44619,1689139024083 in 172 msec 2023-07-12 05:17:27,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=797b32715a69cd102e216d93a59580cb, REOPEN/MOVE in 496 msec 2023-07-12 05:17:27,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 05:17:27,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
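The MoveTables request that completes here ("All regions from table(s) [testRename] moved to target group default") corresponds to the rsgroup client's moveTables call. A minimal, hedged sketch assuming an open Connection `conn` (illustrative, not the test's literal code):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesExample {
      static void moveTestRenameToDefault(Connection conn) throws Exception {
        // The call returns once the master has finished the region-move procedures
        // (the "waitFor pid=126" line above).
        new RSGroupAdminClient(conn).moveTables(
            Collections.singleton(TableName.valueOf("testRename")), "default");
      }
    }

A follow-up RegionLocator#getRegionLocation lookup with reload=true would presumably show the region now hosted on jenkins-hbase20.apache.org,44619, matching the OPEN state written to hbase:meta above.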
2023-07-12 05:17:27,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:27,568 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup default 2023-07-12 05:17:27,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 05:17:27,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:27,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 05:17:27,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to newgroup 2023-07-12 05:17:27,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 05:17:27,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:27,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup newgroup 2023-07-12 05:17:27,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:27,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:27,594 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:27,596 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:27,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:27,604 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:27,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:27,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 759 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140247621, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:27,622 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:27,624 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:27,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,625 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:27,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:27,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,642 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=498 (was 505), OpenFileDescriptor=757 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=527 (was 573), ProcessCount=167 (was 170), AvailableMemoryMB=3641 (was 3002) - AvailableMemoryMB LEAK? - 2023-07-12 05:17:27,659 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=498, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=527, ProcessCount=167, AvailableMemoryMB=3641 2023-07-12 05:17:27,659 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-12 05:17:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,664 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:27,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
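The "Got this on setup, FYI" warning above comes from the harness trying to move the master's address (port 41085) into a dedicated 'master' rsgroup and tolerating the resulting ConstraintException, since that address is not a live region server. A rough sketch of that pattern, assuming an open Connection `conn`; the host/port literal is copied from the log and this is not the test's literal code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class ParkMasterExample {
      static void tryParkMaster(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 41085)),
              "master");
        } catch (ConstraintException e) {
          // Expected when the address belongs to the master rather than a region
          // server; the test base only logs it ("Got this on setup, FYI").
          System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
      }
    }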
2023-07-12 05:17:27,664 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:27,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:27,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:27,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:27,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:27,673 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:27,675 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:27,676 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:27,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:27,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:27,703 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,703 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:27,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 787 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140247708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:27,709 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:27,711 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:27,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,713 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:27,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:27,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 05:17:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 05:17:27,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 05:17:27,724 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bogus 2023-07-12 05:17:27,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bogus 2023-07-12 05:17:27,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 799 service: MasterService methodName: ExecMasterService size: 87 connection: 148.251.75.209:54108 deadline: 1689140247725, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 05:17:27,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [bogus:123] to rsgroup bogus 2023-07-12 05:17:27,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] 
ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 96 connection: 148.251.75.209:54108 deadline: 1689140247727, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 05:17:27,729 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 05:17:27,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=true 2023-07-12 05:17:27,734 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//148.251.75.209 balance rsgroup, group=bogus 2023-07-12 05:17:27,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 88 connection: 148.251.75.209:54108 deadline: 1689140247733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 05:17:27,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:27,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
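The testBogusArgs exchanges above show how the endpoint treats unknown names: the GetRSGroupInfo/GetRSGroupInfoOfTable/GetRSGroupInfoOfServer lookups for "bogus", "nonexistent", and "bogus:123" complete without any logged exception (presumably returning empty results), while removeRSGroup, moveServers, and balanceRSGroup against "bogus" are rejected with ConstraintException. A hedged client-side sketch of the same probes, assuming an open Connection `conn` (illustrative only):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class BogusArgsExample {
      static void probeBogusNames(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Lookups for unknown names; no exception is logged for these in the log
        // above, so they are expected to come back empty/null.
        System.out.println(rsGroupAdmin.getRSGroupInfo("bogus"));
        System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")));
        // Mutating calls on a missing group are rejected by the master.
        try {
          rsGroupAdmin.removeRSGroup("bogus");
        } catch (ConstraintException e) {
          System.out.println("removeRSGroup rejected: " + e.getMessage());
        }
        try {
          rsGroupAdmin.balanceRSGroup("bogus");
        } catch (ConstraintException e) {
          System.out.println("balanceRSGroup rejected: " + e.getMessage());
        }
      }
    }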
2023-07-12 05:17:27,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:27,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:27,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:27,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:27,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:27,745 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:27,747 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:27,748 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:27,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:27,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:27,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 830 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140247769, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:27,773 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:27,774 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:27,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,776 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,794 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=502 (was 498) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x57a3c224-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=757 (was 757), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=527 (was 527), ProcessCount=167 (was 167), AvailableMemoryMB=3636 (was 3641) 2023-07-12 05:17:27,794 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 05:17:27,812 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=527, ProcessCount=167, AvailableMemoryMB=3633 2023-07-12 05:17:27,812 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 05:17:27,812 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 05:17:27,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:27,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
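The entries above and the ones that follow repeat the same per-test cleanup cycle: move all tables and servers back to the default group, drop and re-create the special "master" group, and then try to move the master's own host:port into it, which fails with the "offline or does not exist" ConstraintException that the "Got this on setup, FYI" WARN lines show being tolerated. A rough sketch of that cycle against the RSGroupAdminClient methods named in the traces; this is illustrative only, not the actual TestRSGroupsBase teardown code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    void resetGroups(RSGroupAdminClient rsGroupAdmin) throws Exception {
        // Empty sets are accepted and logged as "passed an empty set. Ignoring."
        rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
        rsGroupAdmin.moveServers(Collections.<Address>emptySet(), "default");
        rsGroupAdmin.removeRSGroup("master");
        rsGroupAdmin.addRSGroup("master");
        try {
            // The master's address is not a region server, so this is expected to fail.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromString("jenkins-hbase20.apache.org:41085")),
                "master");
        } catch (ConstraintException e) {
            // Logged as "Got this on setup, FYI" and ignored by the fixture.
        }
    }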
2023-07-12 05:17:27,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:27,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:27,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:27,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:27,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:27,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:27,834 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:27,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:27,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:27,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:27,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:27,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:27,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 858 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140247857, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:27,858 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:27,861 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:27,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,862 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:27,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:27,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:27,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
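The entries just above create the test-specific group Group_testDisabledTableMove_572562043, and the entries that follow move two region servers (ports 38695 and 35711) into it before the table is created. Expressed against the same client API, again only as a sketch with illustrative names:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    void setUpGroup(RSGroupAdminClient rsGroupAdmin) throws Exception {
        String group = "Group_testDisabledTableMove_572562043";
        rsGroupAdmin.addRSGroup(group);            // "add rsgroup Group_testDisabledTableMove_572562043"
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromString("jenkins-hbase20.apache.org:38695"));
        servers.add(Address.fromString("jenkins-hbase20.apache.org:35711"));
        rsGroupAdmin.moveServers(servers, group);  // "Move servers done: default => Group_testDisabledTableMove_572562043"
    }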
2023-07-12 05:17:27,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:27,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 05:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to default 2023-07-12 05:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:27,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:27,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:27,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, 
group=Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:27,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:27,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:27,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:27,918 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-12 05:17:27,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 05:17:27,922 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:27,924 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:27,925 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:27,925 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:27,937 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:27,943 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:27,944 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:27,944 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:27,944 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:27,944 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:27,945 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 empty. 2023-07-12 05:17:27,945 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 empty. 2023-07-12 05:17:27,945 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a empty. 2023-07-12 05:17:27,945 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa empty. 2023-07-12 05:17:27,945 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 empty. 2023-07-12 05:17:27,946 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:27,946 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:27,946 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:27,946 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:27,946 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:27,946 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 05:17:27,974 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:27,977 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2306e4cf4ae9e28f0b8efdcbf67eee16, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.', STARTKEY 
=> 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:27,977 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => ac72c3269fa1b5a76e921940512e5a1a, NAME => 'Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:27,977 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4f872d2cb9f686d856b506bce7c783e2, NAME => 'Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:28,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing ac72c3269fa1b5a76e921940512e5a1a, disabling compactions & flushes 2023-07-12 05:17:28,047 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 
after waiting 0 ms 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,047 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,047 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for ac72c3269fa1b5a76e921940512e5a1a: 2023-07-12 05:17:28,047 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6f92af2c815370ac62101af5d43afa34, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:28,064 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 05:17:28,066 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,066 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 6f92af2c815370ac62101af5d43afa34, disabling compactions & flushes 2023-07-12 05:17:28,066 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. after waiting 0 ms 2023-07-12 05:17:28,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,067 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 
2023-07-12 05:17:28,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 6f92af2c815370ac62101af5d43afa34: 2023-07-12 05:17:28,067 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 17ed68757f9ad3bcb8e029f38ef97ffa, NAME => 'Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp 2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 17ed68757f9ad3bcb8e029f38ef97ffa, disabling compactions & flushes 2023-07-12 05:17:28,126 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. after waiting 0 ms 2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,126 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 
2023-07-12 05:17:28,126 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 17ed68757f9ad3bcb8e029f38ef97ffa: 2023-07-12 05:17:28,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 05:17:28,447 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,447 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 4f872d2cb9f686d856b506bce7c783e2, disabling compactions & flushes 2023-07-12 05:17:28,447 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,448 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,448 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. after waiting 0 ms 2023-07-12 05:17:28,448 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,448 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,448 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 4f872d2cb9f686d856b506bce7c783e2: 2023-07-12 05:17:28,449 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,449 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 2306e4cf4ae9e28f0b8efdcbf67eee16, disabling compactions & flushes 2023-07-12 05:17:28,449 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,449 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,449 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 
after waiting 0 ms 2023-07-12 05:17:28,450 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,450 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,450 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 2306e4cf4ae9e28f0b8efdcbf67eee16: 2023-07-12 05:17:28,453 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:28,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139048454"}]},"ts":"1689139048454"} 2023-07-12 05:17:28,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139048454"}]},"ts":"1689139048454"} 2023-07-12 05:17:28,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139048454"}]},"ts":"1689139048454"} 2023-07-12 05:17:28,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139048454"}]},"ts":"1689139048454"} 2023-07-12 05:17:28,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139048454"}]},"ts":"1689139048454"} 2023-07-12 05:17:28,457 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
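The CreateTableProcedure above was started from a create request for 'Group_testDisabledTableMove' with a single family 'f' (the shell-style attributes in the log are the defaults) and four split keys, which is why five regions are initialised and added to meta. A hedged equivalent of that request using the standard 2.x Admin API, with the split keys read off the region names in the log (the two middle keys are binary, shown here as byte literals); the admin handle is assumed to exist:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    void createTestTable(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Four split keys yield five regions: [,aaaaa), [aaaaa,i\xBF\x14i\xBE), ..., [zzzzz,).
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
            new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(desc, splits);
    }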
2023-07-12 05:17:28,458 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:28,458 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139048458"}]},"ts":"1689139048458"} 2023-07-12 05:17:28,460 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 05:17:28,462 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:28,462 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:28,462 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:28,462 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:28,463 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, ASSIGN}] 2023-07-12 05:17:28,466 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, ASSIGN 2023-07-12 05:17:28,466 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, ASSIGN 2023-07-12 05:17:28,466 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, ASSIGN 2023-07-12 05:17:28,466 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, ASSIGN 2023-07-12 05:17:28,467 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, ASSIGN 2023-07-12 05:17:28,467 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:28,467 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44619,1689139024083; forceNewPlan=false, retain=false 2023-07-12 05:17:28,467 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:28,468 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:28,469 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46611,1689139023835; forceNewPlan=false, retain=false 2023-07-12 05:17:28,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 05:17:28,618 INFO [jenkins-hbase20:41085] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
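
At this point the balancer has produced a round-robin plan and the master reports "Reassigned 5 regions". Purely as an illustration (the helper class below is hypothetical, not test code), the resulting region-to-server mapping could be read back from a client with a RegionLocator:

import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class PrintRegionLocations {
  // Prints which server each region of the table was assigned to, the same
  // information the assignment procedures log as "location=..." above.
  static void printLocations(Connection conn) throws Exception {
    TableName tn = TableName.valueOf("Group_testDisabledTableMove");
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      List<HRegionLocation> locs = locator.getAllRegionLocations();
      for (HRegionLocation loc : locs) {
        System.out.println(loc.getRegion().getEncodedName()
            + " -> " + loc.getServerName());
      }
    }
  }
}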
2023-07-12 05:17:28,621 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=2306e4cf4ae9e28f0b8efdcbf67eee16, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,621 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=6f92af2c815370ac62101af5d43afa34, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:28,622 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139048621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139048621"}]},"ts":"1689139048621"} 2023-07-12 05:17:28,621 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=4f872d2cb9f686d856b506bce7c783e2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:28,621 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=ac72c3269fa1b5a76e921940512e5a1a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,622 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139048621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139048621"}]},"ts":"1689139048621"} 2023-07-12 05:17:28,622 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139048621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139048621"}]},"ts":"1689139048621"} 2023-07-12 05:17:28,621 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=17ed68757f9ad3bcb8e029f38ef97ffa, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,622 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139048621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139048621"}]},"ts":"1689139048621"} 2023-07-12 05:17:28,622 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139048621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139048621"}]},"ts":"1689139048621"} 2023-07-12 05:17:28,624 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=132, state=RUNNABLE; OpenRegionProcedure 2306e4cf4ae9e28f0b8efdcbf67eee16, 
server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:28,627 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=131, state=RUNNABLE; OpenRegionProcedure 4f872d2cb9f686d856b506bce7c783e2, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:28,628 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=130, state=RUNNABLE; OpenRegionProcedure ac72c3269fa1b5a76e921940512e5a1a, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:28,630 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure 6f92af2c815370ac62101af5d43afa34, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:28,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure 17ed68757f9ad3bcb8e029f38ef97ffa, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:28,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4f872d2cb9f686d856b506bce7c783e2, NAME => 'Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 05:17:28,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17ed68757f9ad3bcb8e029f38ef97ffa, NAME => 'Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,786 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,787 INFO [StoreOpener-4f872d2cb9f686d856b506bce7c783e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,787 INFO [StoreOpener-17ed68757f9ad3bcb8e029f38ef97ffa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,796 DEBUG [StoreOpener-17ed68757f9ad3bcb8e029f38ef97ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/f 2023-07-12 05:17:28,796 DEBUG [StoreOpener-17ed68757f9ad3bcb8e029f38ef97ffa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/f 2023-07-12 05:17:28,797 INFO [StoreOpener-17ed68757f9ad3bcb8e029f38ef97ffa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17ed68757f9ad3bcb8e029f38ef97ffa columnFamilyName f 2023-07-12 05:17:28,797 INFO [StoreOpener-17ed68757f9ad3bcb8e029f38ef97ffa-1] regionserver.HStore(310): Store=17ed68757f9ad3bcb8e029f38ef97ffa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:28,798 DEBUG [StoreOpener-4f872d2cb9f686d856b506bce7c783e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/f 2023-07-12 05:17:28,798 DEBUG [StoreOpener-4f872d2cb9f686d856b506bce7c783e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/f 2023-07-12 05:17:28,798 INFO [StoreOpener-4f872d2cb9f686d856b506bce7c783e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4f872d2cb9f686d856b506bce7c783e2 columnFamilyName f 2023-07-12 05:17:28,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,799 INFO [StoreOpener-4f872d2cb9f686d856b506bce7c783e2-1] regionserver.HStore(310): Store=4f872d2cb9f686d856b506bce7c783e2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:28,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:28,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:28,809 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 17ed68757f9ad3bcb8e029f38ef97ffa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11988910400, jitterRate=0.11655429005622864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:28,809 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 17ed68757f9ad3bcb8e029f38ef97ffa: 2023-07-12 05:17:28,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 4f872d2cb9f686d856b506bce7c783e2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11031585920, jitterRate=0.02739650011062622}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:28,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 4f872d2cb9f686d856b506bce7c783e2: 2023-07-12 05:17:28,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa., pid=139, masterSystemTime=1689139048778 2023-07-12 05:17:28,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:28,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2306e4cf4ae9e28f0b8efdcbf67eee16, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 05:17:28,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,815 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=17ed68757f9ad3bcb8e029f38ef97ffa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,815 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048815"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139048815"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139048815"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139048815"}]},"ts":"1689139048815"} 2023-07-12 05:17:28,819 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2., pid=136, masterSystemTime=1689139048780 2023-07-12 05:17:28,820 INFO [StoreOpener-2306e4cf4ae9e28f0b8efdcbf67eee16-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:28,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f92af2c815370ac62101af5d43afa34, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 05:17:28,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-12 05:17:28,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure 17ed68757f9ad3bcb8e029f38ef97ffa, server=jenkins-hbase20.apache.org,46611,1689139023835 in 185 msec 2023-07-12 05:17:28,823 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=4f872d2cb9f686d856b506bce7c783e2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 
2023-07-12 05:17:28,823 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048823"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139048823"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139048823"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139048823"}]},"ts":"1689139048823"} 2023-07-12 05:17:28,830 DEBUG [StoreOpener-2306e4cf4ae9e28f0b8efdcbf67eee16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/f 2023-07-12 05:17:28,831 DEBUG [StoreOpener-2306e4cf4ae9e28f0b8efdcbf67eee16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/f 2023-07-12 05:17:28,831 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, ASSIGN in 360 msec 2023-07-12 05:17:28,832 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=131 2023-07-12 05:17:28,832 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=131, state=SUCCESS; OpenRegionProcedure 4f872d2cb9f686d856b506bce7c783e2, server=jenkins-hbase20.apache.org,44619,1689139024083 in 199 msec 2023-07-12 05:17:28,832 INFO [StoreOpener-2306e4cf4ae9e28f0b8efdcbf67eee16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2306e4cf4ae9e28f0b8efdcbf67eee16 columnFamilyName f 2023-07-12 05:17:28,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, ASSIGN in 369 msec 2023-07-12 05:17:28,835 INFO [StoreOpener-2306e4cf4ae9e28f0b8efdcbf67eee16-1] regionserver.HStore(310): Store=2306e4cf4ae9e28f0b8efdcbf67eee16/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:28,843 INFO [StoreOpener-6f92af2c815370ac62101af5d43afa34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,845 DEBUG [StoreOpener-6f92af2c815370ac62101af5d43afa34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/f 2023-07-12 05:17:28,845 DEBUG [StoreOpener-6f92af2c815370ac62101af5d43afa34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/f 2023-07-12 05:17:28,846 INFO [StoreOpener-6f92af2c815370ac62101af5d43afa34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f92af2c815370ac62101af5d43afa34 columnFamilyName f 2023-07-12 05:17:28,847 INFO [StoreOpener-6f92af2c815370ac62101af5d43afa34-1] regionserver.HStore(310): Store=6f92af2c815370ac62101af5d43afa34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:28,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:28,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:28,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:28,863 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:28,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2306e4cf4ae9e28f0b8efdcbf67eee16; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9645801920, jitterRate=-0.10166469216346741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:28,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2306e4cf4ae9e28f0b8efdcbf67eee16: 2023-07-12 05:17:28,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 6f92af2c815370ac62101af5d43afa34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11293079680, jitterRate=0.051750004291534424}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:28,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 6f92af2c815370ac62101af5d43afa34: 2023-07-12 05:17:28,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16., pid=135, masterSystemTime=1689139048778 2023-07-12 05:17:28,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34., pid=138, masterSystemTime=1689139048780 2023-07-12 05:17:28,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,866 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:28,866 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=6f92af2c815370ac62101af5d43afa34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:28,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048866"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139048866"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139048866"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139048866"}]},"ts":"1689139048866"} 2023-07-12 05:17:28,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 
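
The OpenRegionProcedure and ASSIGN completions above leave the sn/server columns in hbase:meta pointing at jenkins-hbase20.apache.org,46611,... and ...,44619,... A hypothetical way to dump the same region-to-server view from a client, assuming the HBase 2.x ClusterMetrics API (the class name is illustrative):

import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;

public class ListRegionsPerServer {
  // Lists which live region server currently carries each region of the
  // table, the information the master writes into the server columns above.
  static void dump(Admin admin, TableName table) throws Exception {
    for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
      List<RegionInfo> regions = admin.getRegions(sn);
      for (RegionInfo ri : regions) {
        if (ri.getTable().equals(table)) {
          System.out.println(sn + " hosts " + ri.getEncodedName());
        }
      }
    }
  }
}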
2023-07-12 05:17:28,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:28,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac72c3269fa1b5a76e921940512e5a1a, NAME => 'Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 05:17:28,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:28,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,868 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=2306e4cf4ae9e28f0b8efdcbf67eee16, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,869 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139048868"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139048868"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139048868"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139048868"}]},"ts":"1689139048868"} 2023-07-12 05:17:28,870 INFO [StoreOpener-ac72c3269fa1b5a76e921940512e5a1a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-12 05:17:28,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure 6f92af2c815370ac62101af5d43afa34, server=jenkins-hbase20.apache.org,44619,1689139024083 in 239 msec 2023-07-12 05:17:28,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=132 2023-07-12 05:17:28,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; OpenRegionProcedure 2306e4cf4ae9e28f0b8efdcbf67eee16, server=jenkins-hbase20.apache.org,46611,1689139023835 in 246 msec 2023-07-12 05:17:28,872 DEBUG 
[StoreOpener-ac72c3269fa1b5a76e921940512e5a1a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/f 2023-07-12 05:17:28,873 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, ASSIGN in 408 msec 2023-07-12 05:17:28,873 DEBUG [StoreOpener-ac72c3269fa1b5a76e921940512e5a1a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/f 2023-07-12 05:17:28,873 INFO [StoreOpener-ac72c3269fa1b5a76e921940512e5a1a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac72c3269fa1b5a76e921940512e5a1a columnFamilyName f 2023-07-12 05:17:28,874 INFO [StoreOpener-ac72c3269fa1b5a76e921940512e5a1a-1] regionserver.HStore(310): Store=ac72c3269fa1b5a76e921940512e5a1a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:28,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, ASSIGN in 409 msec 2023-07-12 05:17:28,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:28,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:28,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ac72c3269fa1b5a76e921940512e5a1a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11094382720, jitterRate=0.03324490785598755}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:28,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ac72c3269fa1b5a76e921940512e5a1a: 2023-07-12 05:17:28,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a., pid=137, masterSystemTime=1689139048778 2023-07-12 05:17:28,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:28,887 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=ac72c3269fa1b5a76e921940512e5a1a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:28,887 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139048887"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139048887"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139048887"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139048887"}]},"ts":"1689139048887"} 2023-07-12 05:17:28,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=130 2023-07-12 05:17:28,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=130, state=SUCCESS; OpenRegionProcedure ac72c3269fa1b5a76e921940512e5a1a, server=jenkins-hbase20.apache.org,46611,1689139023835 in 260 msec 2023-07-12 05:17:28,891 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-12 05:17:28,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, ASSIGN in 426 msec 2023-07-12 05:17:28,892 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:28,892 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139048892"}]},"ts":"1689139048892"} 2023-07-12 05:17:28,893 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 05:17:28,895 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:28,897 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure 
table=Group_testDisabledTableMove in 982 msec 2023-07-12 05:17:29,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 05:17:29,025 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 129 completed 2023-07-12 05:17:29,025 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 05:17:29,026 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:29,030 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-12 05:17:29,030 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:29,030 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 05:17:29,031 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:29,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 05:17:29,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:29,038 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 05:17:29,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-12 05:17:29,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 05:17:29,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139049042"}]},"ts":"1689139049042"} 2023-07-12 05:17:29,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 05:17:29,045 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 05:17:29,045 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, UNASSIGN}] 2023-07-12 05:17:29,047 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, UNASSIGN 2023-07-12 05:17:29,047 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, UNASSIGN 2023-07-12 05:17:29,047 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, UNASSIGN 2023-07-12 05:17:29,048 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, UNASSIGN 2023-07-12 05:17:29,048 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, UNASSIGN 2023-07-12 05:17:29,048 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=2306e4cf4ae9e28f0b8efdcbf67eee16, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:29,048 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=6f92af2c815370ac62101af5d43afa34, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:29,048 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=4f872d2cb9f686d856b506bce7c783e2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:29,049 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049048"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139049048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139049048"}]},"ts":"1689139049048"} 2023-07-12 05:17:29,049 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049048"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139049048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139049048"}]},"ts":"1689139049048"} 2023-07-12 05:17:29,049 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049048"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139049048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139049048"}]},"ts":"1689139049048"} 2023-07-12 05:17:29,049 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=ac72c3269fa1b5a76e921940512e5a1a, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:29,049 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=17ed68757f9ad3bcb8e029f38ef97ffa, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:29,049 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139049049"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139049049"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139049049"}]},"ts":"1689139049049"} 2023-07-12 05:17:29,049 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139049049"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139049049"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139049049"}]},"ts":"1689139049049"} 2023-07-12 05:17:29,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=144, state=RUNNABLE; CloseRegionProcedure 6f92af2c815370ac62101af5d43afa34, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:29,051 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=143, state=RUNNABLE; CloseRegionProcedure 2306e4cf4ae9e28f0b8efdcbf67eee16, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:29,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=142, state=RUNNABLE; CloseRegionProcedure 4f872d2cb9f686d856b506bce7c783e2, server=jenkins-hbase20.apache.org,44619,1689139024083}] 2023-07-12 05:17:29,053 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=141, state=RUNNABLE; CloseRegionProcedure ac72c3269fa1b5a76e921940512e5a1a, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:29,054 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 17ed68757f9ad3bcb8e029f38ef97ffa, server=jenkins-hbase20.apache.org,46611,1689139023835}] 2023-07-12 05:17:29,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is 
done pid=140 2023-07-12 05:17:29,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:29,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 6f92af2c815370ac62101af5d43afa34, disabling compactions & flushes 2023-07-12 05:17:29,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:29,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:29,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. after waiting 0 ms 2023-07-12 05:17:29,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 2023-07-12 05:17:29,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:29,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ac72c3269fa1b5a76e921940512e5a1a, disabling compactions & flushes 2023-07-12 05:17:29,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:29,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:29,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. after waiting 0 ms 2023-07-12 05:17:29,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:29,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:29,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34. 
2023-07-12 05:17:29,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 6f92af2c815370ac62101af5d43afa34: 2023-07-12 05:17:29,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:29,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:29,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4f872d2cb9f686d856b506bce7c783e2, disabling compactions & flushes 2023-07-12 05:17:29,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:29,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:29,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. after waiting 0 ms 2023-07-12 05:17:29,231 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=6f92af2c815370ac62101af5d43afa34, regionState=CLOSED 2023-07-12 05:17:29,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 2023-07-12 05:17:29,232 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049231"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139049231"}]},"ts":"1689139049231"} 2023-07-12 05:17:29,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:29,233 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a. 2023-07-12 05:17:29,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ac72c3269fa1b5a76e921940512e5a1a: 2023-07-12 05:17:29,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:29,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:29,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2306e4cf4ae9e28f0b8efdcbf67eee16, disabling compactions & flushes 2023-07-12 05:17:29,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 
2023-07-12 05:17:29,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:29,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. after waiting 0 ms 2023-07-12 05:17:29,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:29,241 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=ac72c3269fa1b5a76e921940512e5a1a, regionState=CLOSED 2023-07-12 05:17:29,241 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139049241"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139049241"}]},"ts":"1689139049241"} 2023-07-12 05:17:29,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=144 2023-07-12 05:17:29,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=144, state=SUCCESS; CloseRegionProcedure 6f92af2c815370ac62101af5d43afa34, server=jenkins-hbase20.apache.org,44619,1689139024083 in 184 msec 2023-07-12 05:17:29,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6f92af2c815370ac62101af5d43afa34, UNASSIGN in 198 msec 2023-07-12 05:17:29,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:29,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=141 2023-07-12 05:17:29,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=141, state=SUCCESS; CloseRegionProcedure ac72c3269fa1b5a76e921940512e5a1a, server=jenkins-hbase20.apache.org,46611,1689139023835 in 190 msec 2023-07-12 05:17:29,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2. 
2023-07-12 05:17:29,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4f872d2cb9f686d856b506bce7c783e2: 2023-07-12 05:17:29,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac72c3269fa1b5a76e921940512e5a1a, UNASSIGN in 202 msec 2023-07-12 05:17:29,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:29,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16. 2023-07-12 05:17:29,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2306e4cf4ae9e28f0b8efdcbf67eee16: 2023-07-12 05:17:29,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:29,253 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=4f872d2cb9f686d856b506bce7c783e2, regionState=CLOSED 2023-07-12 05:17:29,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139049253"}]},"ts":"1689139049253"} 2023-07-12 05:17:29,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:29,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:29,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 17ed68757f9ad3bcb8e029f38ef97ffa, disabling compactions & flushes 2023-07-12 05:17:29,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:29,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:29,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. after waiting 0 ms 2023-07-12 05:17:29,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 
2023-07-12 05:17:29,256 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=2306e4cf4ae9e28f0b8efdcbf67eee16, regionState=CLOSED 2023-07-12 05:17:29,256 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689139049256"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139049256"}]},"ts":"1689139049256"} 2023-07-12 05:17:29,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:29,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa. 2023-07-12 05:17:29,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 17ed68757f9ad3bcb8e029f38ef97ffa: 2023-07-12 05:17:29,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:29,264 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=17ed68757f9ad3bcb8e029f38ef97ffa, regionState=CLOSED 2023-07-12 05:17:29,264 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689139049264"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139049264"}]},"ts":"1689139049264"} 2023-07-12 05:17:29,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=142 2023-07-12 05:17:29,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=142, state=SUCCESS; CloseRegionProcedure 4f872d2cb9f686d856b506bce7c783e2, server=jenkins-hbase20.apache.org,44619,1689139024083 in 209 msec 2023-07-12 05:17:29,265 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=143 2023-07-12 05:17:29,265 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; CloseRegionProcedure 2306e4cf4ae9e28f0b8efdcbf67eee16, server=jenkins-hbase20.apache.org,46611,1689139023835 in 211 msec 2023-07-12 05:17:29,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4f872d2cb9f686d856b506bce7c783e2, UNASSIGN in 219 msec 2023-07-12 05:17:29,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2306e4cf4ae9e28f0b8efdcbf67eee16, UNASSIGN in 220 msec 2023-07-12 05:17:29,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-12 05:17:29,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 17ed68757f9ad3bcb8e029f38ef97ffa, server=jenkins-hbase20.apache.org,46611,1689139023835 
in 212 msec 2023-07-12 05:17:29,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-12 05:17:29,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ed68757f9ad3bcb8e029f38ef97ffa, UNASSIGN in 222 msec 2023-07-12 05:17:29,269 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139049269"}]},"ts":"1689139049269"} 2023-07-12 05:17:29,270 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 05:17:29,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 05:17:29,562 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 05:17:29,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 525 msec 2023-07-12 05:17:29,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 05:17:29,646 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-12 05:17:29,646 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,649 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:29,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:29,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 05:17:29,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_572562043, current retry=0 2023-07-12 05:17:29,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_572562043. 
2023-07-12 05:17:29,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:29,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 05:17:29,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:29,660 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 05:17:29,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-12 05:17:29,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:29,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 148.251.75.209:54108 deadline: 1689139109660, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 05:17:29,661 DEBUG [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-12 05:17:29,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testDisabledTableMove 2023-07-12 05:17:29,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,665 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_572562043' 2023-07-12 05:17:29,666 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:29,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:29,672 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:29,672 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:29,672 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:29,672 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:29,672 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:29,675 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/f, FileablePath, 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/recovered.edits] 2023-07-12 05:17:29,675 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/recovered.edits] 2023-07-12 05:17:29,675 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/recovered.edits] 2023-07-12 05:17:29,675 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/recovered.edits] 2023-07-12 05:17:29,676 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/f, FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/recovered.edits] 2023-07-12 05:17:29,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 05:17:29,682 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a/recovered.edits/4.seqid 2023-07-12 05:17:29,684 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/ac72c3269fa1b5a76e921940512e5a1a 2023-07-12 05:17:29,684 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa/recovered.edits/4.seqid 
2023-07-12 05:17:29,684 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34/recovered.edits/4.seqid 2023-07-12 05:17:29,685 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16/recovered.edits/4.seqid 2023-07-12 05:17:29,685 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/17ed68757f9ad3bcb8e029f38ef97ffa 2023-07-12 05:17:29,686 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/6f92af2c815370ac62101af5d43afa34 2023-07-12 05:17:29,686 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/2306e4cf4ae9e28f0b8efdcbf67eee16 2023-07-12 05:17:29,686 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/recovered.edits/4.seqid to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/archive/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2/recovered.edits/4.seqid 2023-07-12 05:17:29,686 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/.tmp/data/default/Group_testDisabledTableMove/4f872d2cb9f686d856b506bce7c783e2 2023-07-12 05:17:29,686 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 05:17:29,689 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,691 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 05:17:29,702 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-12 05:17:29,703 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,703 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-12 05:17:29,704 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139049704"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,704 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139049704"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,704 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139049704"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,704 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139049704"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,704 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139049704"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,706 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 05:17:29,706 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ac72c3269fa1b5a76e921940512e5a1a, NAME => 'Group_testDisabledTableMove,,1689139047912.ac72c3269fa1b5a76e921940512e5a1a.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 4f872d2cb9f686d856b506bce7c783e2, NAME => 'Group_testDisabledTableMove,aaaaa,1689139047912.4f872d2cb9f686d856b506bce7c783e2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 2306e4cf4ae9e28f0b8efdcbf67eee16, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689139047912.2306e4cf4ae9e28f0b8efdcbf67eee16.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 6f92af2c815370ac62101af5d43afa34, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689139047912.6f92af2c815370ac62101af5d43afa34.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 17ed68757f9ad3bcb8e029f38ef97ffa, NAME => 'Group_testDisabledTableMove,zzzzz,1689139047912.17ed68757f9ad3bcb8e029f38ef97ffa.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 05:17:29,706 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-12 05:17:29,706 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139049706"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:29,710 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 05:17:29,712 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 05:17:29,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 50 msec 2023-07-12 05:17:29,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 05:17:29,779 INFO [Listener at localhost.localdomain/33317] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-12 05:17:29,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:29,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 05:17:29,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:29,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:29,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:29,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:29,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:29,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:29,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:29,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 05:17:29,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:29,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:35711] to rsgroup default 2023-07-12 05:17:29,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_572562043, current retry=0 2023-07-12 05:17:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,35711,1689139024278, jenkins-hbase20.apache.org,38695,1689139027905] are moved back to Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_572562043 => default 2023-07-12 05:17:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:29,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testDisabledTableMove_572562043 2023-07-12 05:17:29,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:29,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:29,805 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:29,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:29,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:29,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 
2023-07-12 05:17:29,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:29,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:29,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:29,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140249820, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:29,821 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:29,822 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:29,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,823 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:29,824 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:29,824 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:29,841 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=505 (was 502) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-26352938_17 at /127.0.0.1:58030 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x326ecb32-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1536315787_17 at 
/127.0.0.1:54094 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x61c28258-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=781 (was 757) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=525 (was 527), ProcessCount=170 (was 167) - ProcessCount LEAK? -, AvailableMemoryMB=3292 (was 3633) 2023-07-12 05:17:29,841 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 05:17:29,857 INFO [Listener at localhost.localdomain/33317] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=505, OpenFileDescriptor=781, MaxFileDescriptor=60000, SystemLoadAverage=525, ProcessCount=170, AvailableMemoryMB=3290 2023-07-12 05:17:29,857 WARN [Listener at localhost.localdomain/33317] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 05:17:29,857 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 05:17:29,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:29,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
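
The entries just above and below this point are the per-test rsgroup cleanup pass: list the groups, move any leftover tables and servers back to "default" (both sets are empty here, hence "passed an empty set. Ignoring."), drop and re-create the "master" group, and finally try to move the master's own address into it. The following is only a rough sketch of that kind of call sequence, assuming the RSGroupAdminClient API named in the stack traces below (constructor taking a Connection, moveTables/moveServers/addRSGroup/removeRSGroup); the class name, the `conn` variable, and the exception handling are illustrative assumptions, not the test's actual code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      // Sketch only: mirrors the rsgroup cleanup calls logged around this point.
      static void cleanup(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.listRSGroups();                                            // "list rsgroup"
        groups.moveTables(Collections.<TableName>emptySet(), "default");  // "passed an empty set. Ignoring."
        groups.moveServers(Collections.<Address>emptySet(), "default");
        groups.removeRSGroup("master");                                   // drop and re-create the helper group
        groups.addRSGroup("master");
        try {
          // The master's address is not a live region server, so this is the call
          // that produces the ConstraintException seen a few entries below.
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 41085)),
              "master");
        } catch (IOException e) {
          // The test base class logs this as "Got this on setup, FYI" and continues.
        }
      }
    }
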
2023-07-12 05:17:29,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:29,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:29,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:29,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:29,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:29,872 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:29,875 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:29,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:29,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:29,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:29,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:29,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:29,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41085] to rsgroup master 2023-07-12 05:17:29,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:29,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:54108 deadline: 1689140249906, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 2023-07-12 05:17:29,908 WARN [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41085 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:29,910 INFO [Listener at localhost.localdomain/33317] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:29,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:29,911 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:29,911 INFO [Listener at localhost.localdomain/33317] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35711, jenkins-hbase20.apache.org:38695, jenkins-hbase20.apache.org:44619, jenkins-hbase20.apache.org:46611], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41085] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:29,912 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 05:17:29,912 INFO [Listener at localhost.localdomain/33317] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 05:17:29,913 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x41587bac to 127.0.0.1:62508 2023-07-12 05:17:29,913 DEBUG [Listener at localhost.localdomain/33317] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:29,916 DEBUG [Listener at localhost.localdomain/33317] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 05:17:29,916 DEBUG [Listener at localhost.localdomain/33317] util.JVMClusterUtil(257): Found active master hash=1205004407, stopped=false 2023-07-12 05:17:29,916 DEBUG [Listener at localhost.localdomain/33317] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 
05:17:29,917 DEBUG [Listener at localhost.localdomain/33317] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:29,917 INFO [Listener at localhost.localdomain/33317] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:29,918 INFO [Listener at localhost.localdomain/33317] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:29,918 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:29,918 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:29,918 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:29,919 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:29,919 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:29,919 DEBUG [Listener at localhost.localdomain/33317] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x35c80c30 to 127.0.0.1:62508 2023-07-12 05:17:29,919 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38695,1689139027905' ***** 2023-07-12 05:17:29,920 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-12 05:17:29,919 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:29,920 DEBUG [Listener at localhost.localdomain/33317] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:29,919 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1064): Closing user regions 2023-07-12 05:17:29,920 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(3305): Received CLOSE for 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:29,920 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,35711,1689139024278' ***** 2023-07-12 05:17:29,921 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-12 05:17:29,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 797b32715a69cd102e216d93a59580cb, disabling compactions & flushes 2023-07-12 05:17:29,922 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:29,922 INFO [Listener at localhost.localdomain/33317] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,46611,1689139023835' ***** 2023-07-12 05:17:29,935 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:29,935 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1064): Closing user regions 2023-07-12 05:17:29,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:29,937 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3305): Received CLOSE for 65a59a940eb599446f9a504f8dbf75d7 2023-07-12 05:17:29,938 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3305): Received CLOSE for e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:29,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 65a59a940eb599446f9a504f8dbf75d7, disabling compactions & flushes 2023-07-12 05:17:29,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:29,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:29,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. after waiting 0 ms 2023-07-12 05:17:29,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
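
The ZooKeeper traffic a few entries above (NodeDeleted for /hbase/running, then ZKUtil "Set watcher on znode that does not yet exist") is the usual way a watch stays armed on a znode that has just been removed: exists() registers the watch whether or not the node is present. A minimal sketch with the plain ZooKeeper client; the quorum address, znode handling, and class name are placeholders for illustration (this run used 127.0.0.1:62508).

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class RunningZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        // exists() answers "is the cluster up?" and re-arms a watch on the path,
        // even when the znode is currently absent -- the behaviour ZKUtil logs above.
        Stat stat = zk.exists("/hbase/running", event -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated
              || event.getType() == Watcher.Event.EventType.NodeDeleted) {
            System.out.println("/hbase/running changed: " + event.getType());
          }
        });
        System.out.println(stat == null ? "cluster not running" : "cluster running");
        zk.close();
      }
    }
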
2023-07-12 05:17:29,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 65a59a940eb599446f9a504f8dbf75d7 1/1 column families, dataSize=22.38 KB heapSize=36.84 KB 2023-07-12 05:17:29,936 INFO [Listener at localhost.localdomain/33317] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:29,939 INFO [Listener at localhost.localdomain/33317] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44619,1689139024083' ***** 2023-07-12 05:17:29,939 INFO [Listener at localhost.localdomain/33317] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:29,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:29,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. after waiting 0 ms 2023-07-12 05:17:29,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:29,938 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3305): Received CLOSE for aa9de83082eb73885ee3fc61a2c971d8 2023-07-12 05:17:29,946 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:29,948 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:29,948 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:29,948 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:29,951 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:29,951 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:29,952 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:29,952 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:29,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/testRename/797b32715a69cd102e216d93a59580cb/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 05:17:29,959 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:29,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 2023-07-12 05:17:29,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 797b32715a69cd102e216d93a59580cb: 2023-07-12 05:17:29,969 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41085] assignment.AssignmentManager(1092): RegionServer CLOSED 797b32715a69cd102e216d93a59580cb 2023-07-12 05:17:29,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689139042186.797b32715a69cd102e216d93a59580cb. 
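
Before the "Shutting down minicluster" entry above, the listener thread polls ("Waiting up to [60,000] milli-secs ... Waiting for cleanup to finish") until the rsgroup layout is back in its expected shape, then tears the minicluster down. A hedged sketch of that pattern with HBaseTestingUtility follows; the four-server expectation, the utility and client variable names, and the class name are assumptions for illustration, not the test's source.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class CleanupWaitSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      static void waitForCleanupThenStop(RSGroupAdminClient groups) throws Exception {
        // Poll for up to 60 s, matching the "Waiting up to [60,000] milli-secs" entries:
        // the default group should hold all four region servers again before moving on.
        TEST_UTIL.waitFor(60000, () -> {
          try {
            RSGroupInfo def = groups.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
            return def != null && def.getServers().size() == 4;
          } catch (IOException e) {
            return false;     // not ready yet, keep polling
          }
        });
        // "Shutting down minicluster" -- stops region servers, master, DFS and ZK.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
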
2023-07-12 05:17:29,996 INFO [RS:3;jenkins-hbase20:38695] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@8fffd09{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:29,998 INFO [RS:1;jenkins-hbase20:44619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2be25988{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:29,999 INFO [RS:0;jenkins-hbase20:46611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@354b4393{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:30,005 INFO [RS:2;jenkins-hbase20:35711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40d19b62{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:30,005 INFO [RS:3;jenkins-hbase20:38695] server.AbstractConnector(383): Stopped ServerConnector@7b5edea0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,005 INFO [RS:1;jenkins-hbase20:44619] server.AbstractConnector(383): Stopped ServerConnector@3702ea50{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,005 INFO [RS:2;jenkins-hbase20:35711] server.AbstractConnector(383): Stopped ServerConnector@1fdbb67e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,005 INFO [RS:0;jenkins-hbase20:46611] server.AbstractConnector(383): Stopped ServerConnector@62761c88{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,005 INFO [RS:2;jenkins-hbase20:35711] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:30,005 INFO [RS:1;jenkins-hbase20:44619] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:30,005 INFO [RS:3;jenkins-hbase20:38695] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:30,005 INFO [RS:0;jenkins-hbase20:46611] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:30,007 INFO [RS:2;jenkins-hbase20:35711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6687e8e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:30,007 INFO [RS:1;jenkins-hbase20:44619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@62322ad2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:30,009 INFO [RS:0;jenkins-hbase20:46611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@588dc7af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:30,009 INFO [RS:2;jenkins-hbase20:35711] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@546de504{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:30,011 INFO [RS:1;jenkins-hbase20:44619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@792df3d0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:30,028 INFO [RS:3;jenkins-hbase20:38695] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a356b4f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:30,028 INFO [RS:0;jenkins-hbase20:46611] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3eb786a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:30,028 INFO [RS:2;jenkins-hbase20:35711] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:30,029 INFO [RS:2;jenkins-hbase20:35711] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:30,029 INFO [RS:2;jenkins-hbase20:35711] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:30,029 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:30,031 DEBUG [RS:2;jenkins-hbase20:35711] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24f517be to 127.0.0.1:62508 2023-07-12 05:17:30,031 DEBUG [RS:2;jenkins-hbase20:35711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,031 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,35711,1689139024278; all regions closed. 2023-07-12 05:17:30,030 INFO [RS:3;jenkins-hbase20:38695] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b09697f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:30,030 INFO [RS:1;jenkins-hbase20:44619] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:30,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.38 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/d3616a2b6c46481f8d53dccf5313aa6f 2023-07-12 05:17:30,036 INFO [RS:1;jenkins-hbase20:44619] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:30,036 INFO [RS:3;jenkins-hbase20:38695] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:30,036 INFO [RS:0;jenkins-hbase20:46611] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:30,036 INFO [RS:3;jenkins-hbase20:38695] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 05:17:30,036 INFO [RS:3;jenkins-hbase20:38695] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:30,037 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:30,037 DEBUG [RS:3;jenkins-hbase20:38695] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2aff626f to 127.0.0.1:62508 2023-07-12 05:17:30,037 DEBUG [RS:3;jenkins-hbase20:38695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,037 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38695,1689139027905; all regions closed. 2023-07-12 05:17:30,036 INFO [RS:0;jenkins-hbase20:46611] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:30,037 INFO [RS:0;jenkins-hbase20:46611] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:30,037 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3307): Received CLOSE for the region: e3499c438782e9645a7a2e6435450c64, which we are already trying to CLOSE, but not completed yet 2023-07-12 05:17:30,037 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3307): Received CLOSE for the region: aa9de83082eb73885ee3fc61a2c971d8, which we are already trying to CLOSE, but not completed yet 2023-07-12 05:17:30,037 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:30,037 DEBUG [RS:0;jenkins-hbase20:46611] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b5e8d80 to 127.0.0.1:62508 2023-07-12 05:17:30,037 DEBUG [RS:0;jenkins-hbase20:46611] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,037 INFO [RS:0;jenkins-hbase20:46611] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:30,038 INFO [RS:0;jenkins-hbase20:46611] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:30,038 INFO [RS:0;jenkins-hbase20:46611] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:30,038 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 05:17:30,036 INFO [RS:1;jenkins-hbase20:44619] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:30,038 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:30,038 DEBUG [RS:1;jenkins-hbase20:44619] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4a5c4ca2 to 127.0.0.1:62508 2023-07-12 05:17:30,038 DEBUG [RS:1;jenkins-hbase20:44619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,038 INFO [RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44619,1689139024083; all regions closed. 
2023-07-12 05:17:30,046 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 05:17:30,047 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1478): Online Regions={65a59a940eb599446f9a504f8dbf75d7=hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7., e3499c438782e9645a7a2e6435450c64=unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64., 1588230740=hbase:meta,,1.1588230740, aa9de83082eb73885ee3fc61a2c971d8=hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8.} 2023-07-12 05:17:30,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:30,047 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:30,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:30,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 05:17:30,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:30,053 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:30,054 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=79.51 KB heapSize=125.46 KB 2023-07-12 05:17:30,054 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1504): Waiting on 1588230740, 65a59a940eb599446f9a504f8dbf75d7, aa9de83082eb73885ee3fc61a2c971d8, e3499c438782e9645a7a2e6435450c64 2023-07-12 05:17:30,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d3616a2b6c46481f8d53dccf5313aa6f 2023-07-12 05:17:30,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/.tmp/m/d3616a2b6c46481f8d53dccf5313aa6f as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/d3616a2b6c46481f8d53dccf5313aa6f 2023-07-12 05:17:30,070 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 05:17:30,071 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 05:17:30,071 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 05:17:30,071 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 05:17:30,072 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 05:17:30,073 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 05:17:30,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete 
Family Bloom (CompoundBloomFilter) metadata for d3616a2b6c46481f8d53dccf5313aa6f 2023-07-12 05:17:30,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/m/d3616a2b6c46481f8d53dccf5313aa6f, entries=22, sequenceid=107, filesize=5.9 K 2023-07-12 05:17:30,098 DEBUG [RS:2;jenkins-hbase20:35711] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs 2023-07-12 05:17:30,098 INFO [RS:2;jenkins-hbase20:35711] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C35711%2C1689139024278:(num 1689139026249) 2023-07-12 05:17:30,098 DEBUG [RS:2;jenkins-hbase20:35711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,098 INFO [RS:2;jenkins-hbase20:35711] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:30,098 DEBUG [RS:3;jenkins-hbase20:38695] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs 2023-07-12 05:17:30,099 INFO [RS:3;jenkins-hbase20:38695] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38695%2C1689139027905:(num 1689139028337) 2023-07-12 05:17:30,099 DEBUG [RS:3;jenkins-hbase20:38695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,099 INFO [RS:3;jenkins-hbase20:38695] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:30,099 INFO [RS:2;jenkins-hbase20:35711] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:30,099 INFO [RS:2;jenkins-hbase20:35711] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:30,099 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:30,099 INFO [RS:2;jenkins-hbase20:35711] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:30,099 INFO [RS:2;jenkins-hbase20:35711] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:30,099 INFO [RS:3;jenkins-hbase20:38695] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:30,101 INFO [RS:3;jenkins-hbase20:38695] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:30,101 INFO [RS:3;jenkins-hbase20:38695] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:30,101 INFO [RS:3;jenkins-hbase20:38695] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:30,102 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
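
The flush of the rsgroup region traced above follows the usual two-step store-file commit: the new file is written under the region's .tmp directory first ("Flushed memstore ... to=.../.tmp/m/..."), then moved into the column-family directory ("Committing ... as .../m/...") so readers only ever see complete files. Below is a minimal sketch of that write-then-rename pattern with the Hadoop FileSystem API; the paths are placeholders standing in for the region directories in the log, and with a default Configuration this runs against the local filesystem rather than the HDFS instance at localhost.localdomain:35039 used here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TmpThenRenameSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Placeholder layout mirroring .../data/hbase/rsgroup/<region>/{.tmp,m}/ in the log.
        Path tmpFile = new Path("/demo/region/.tmp/m/newfile");
        Path finalFile = new Path("/demo/region/m/newfile");
        try (FSDataOutputStream out = fs.create(tmpFile, true)) {
          out.writeUTF("flushed cells would go here");   // stand-in for the HFile contents
        }
        fs.mkdirs(finalFile.getParent());
        // The rename is the "commit" step: readers never observe a half-written file.
        if (!fs.rename(tmpFile, finalFile)) {
          throw new java.io.IOException("commit failed for " + finalFile);
        }
      }
    }
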
2023-07-12 05:17:30,107 INFO [RS:3;jenkins-hbase20:38695] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38695 2023-07-12 05:17:30,108 INFO [RS:2;jenkins-hbase20:35711] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:35711 2023-07-12 05:17:30,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.38 KB/22920, heapSize ~36.82 KB/37704, currentSize=0 B/0 for 65a59a940eb599446f9a504f8dbf75d7 in 177ms, sequenceid=107, compaction requested=true 2023-07-12 05:17:30,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 05:17:30,116 DEBUG [RS:1;jenkins-hbase20:44619] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs 2023-07-12 05:17:30,117 INFO [RS:1;jenkins-hbase20:44619] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44619%2C1689139024083:(num 1689139026256) 2023-07-12 05:17:30,117 DEBUG [RS:1;jenkins-hbase20:44619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,117 INFO [RS:1;jenkins-hbase20:44619] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:30,120 INFO [RS:1;jenkins-hbase20:44619] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:30,120 INFO [RS:1;jenkins-hbase20:44619] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:30,120 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:30,120 INFO [RS:1;jenkins-hbase20:44619] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:30,121 INFO [RS:1;jenkins-hbase20:44619] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
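
The "Chore service for: regionserver/... had [ScheduledChore name=..., period=..., unit=MILLISECONDS] on shutdown" entries above list the periodic background tasks each region server still had scheduled when it stopped. The sketch below shows, under stated assumptions, how such a chore is typically defined and scheduled with HBase's ChoreService; the chore name, period, stopper, and class name are made up for illustration, and constructor overloads may differ between versions.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws Exception {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        // Runs every 2 s until the stopper is flipped, like the cleaners listed above.
        ScheduledChore chore = new ScheduledChore("demo-cleaner", stopper, 2000) {
          @Override protected void chore() {
            System.out.println("periodic cleanup tick");
          }
        };
        ChoreService service = new ChoreService("demo");
        service.scheduleChore(chore);
        Thread.sleep(5000);
        stopper.stop("done");        // chores observe the stopper and stop rescheduling
        service.shutdown();          // mirrors "Chore service ... on shutdown" above
      }
    }
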
2023-07-12 05:17:30,122 INFO [RS:1;jenkins-hbase20:44619] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44619 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:30,147 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): 
regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:30,148 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44619,1689139024083 2023-07-12 05:17:30,148 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:30,148 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,148 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35711,1689139024278 2023-07-12 05:17:30,148 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38695,1689139027905 2023-07-12 05:17:30,148 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38695,1689139027905] 2023-07-12 05:17:30,148 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38695,1689139027905; numProcessing=1 2023-07-12 05:17:30,149 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38695,1689139027905 already deleted, retry=false 2023-07-12 05:17:30,150 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38695,1689139027905 expired; onlineServers=3 2023-07-12 05:17:30,150 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44619,1689139024083] 2023-07-12 05:17:30,150 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44619,1689139024083; numProcessing=2 2023-07-12 05:17:30,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/rsgroup/65a59a940eb599446f9a504f8dbf75d7/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-12 05:17:30,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:30,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 
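
The /hbase/rs/... NodeDeleted events and the RegionServerTracker "RegionServer ephemeral node deleted, processing expiration" entries above are the ZooKeeper side of region-server liveness: each server registers an ephemeral znode, and when its session ends the node disappears and the master treats the server as expired. A small sketch of that mechanism with the plain ZooKeeper client; the quorum address, znode path, and class name are placeholders, not values from this run.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class EphemeralRsNodeSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch up = new CountDownLatch(1);
        ZooKeeper session = new ZooKeeper("127.0.0.1:2181", 30000, e -> {
          if (e.getState() == Watcher.Event.KeeperState.SyncConnected) up.countDown();
        });
        up.await();
        // Liveness marker: exists only while this session is alive, playing the role
        // of /hbase/rs/<host>,<port>,<startcode> in the log above.
        String path = session.create("/demo-rs-server-1", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        System.out.println("registered " + path);
        // Closing the session deletes the znode, which is what triggers the
        // NodeDeleted events and the master-side expiration handling seen here.
        session.close();
      }
    }
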
2023-07-12 05:17:30,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 65a59a940eb599446f9a504f8dbf75d7: 2023-07-12 05:17:30,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689139026919.65a59a940eb599446f9a504f8dbf75d7. 2023-07-12 05:17:30,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e3499c438782e9645a7a2e6435450c64, disabling compactions & flushes 2023-07-12 05:17:30,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:30,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:30,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. after waiting 0 ms 2023-07-12 05:17:30,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:30,170 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.52 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/info/08e18984b3ba4a72a9d5d5cb9e4a1bc3 2023-07-12 05:17:30,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/default/unmovedTable/e3499c438782e9645a7a2e6435450c64/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 05:17:30,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 08e18984b3ba4a72a9d5d5cb9e4a1bc3 2023-07-12 05:17:30,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e3499c438782e9645a7a2e6435450c64: 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689139043856.e3499c438782e9645a7a2e6435450c64. 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing aa9de83082eb73885ee3fc61a2c971d8, disabling compactions & flushes 2023-07-12 05:17:30,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 
after waiting 0 ms 2023-07-12 05:17:30,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:30,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing aa9de83082eb73885ee3fc61a2c971d8 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 05:17:30,217 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/rep_barrier/057e5ec590c240e4824248d9c18a5c87 2023-07-12 05:17:30,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/.tmp/info/5bad854d89a8421e86051e39a117c92e 2023-07-12 05:17:30,223 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 057e5ec590c240e4824248d9c18a5c87 2023-07-12 05:17:30,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/.tmp/info/5bad854d89a8421e86051e39a117c92e as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/info/5bad854d89a8421e86051e39a117c92e 2023-07-12 05:17:30,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/info/5bad854d89a8421e86051e39a117c92e, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 05:17:30,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for aa9de83082eb73885ee3fc61a2c971d8 in 53ms, sequenceid=6, compaction requested=false 2023-07-12 05:17:30,245 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/table/97e206302d6d4a3da44fc6d680702a81 2023-07-12 05:17:30,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/namespace/aa9de83082eb73885ee3fc61a2c971d8/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 05:17:30,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 
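In the entries above, the hbase:namespace region's memstore is flushed into a .tmp hfile, the file is committed into the info family directory, and only then does the region close. A minimal sketch of requesting the same kind of flush through the public client API is shown below; the connection setup is an assumption for illustration and is not taken from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative only: ask the cluster to flush a table's memstores, the same
// operation the region server performs implicitly while closing regions above.
public class FlushSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // A flush writes the memstore to a .tmp hfile and commits it into the
      // column-family directory, as seen for hbase:namespace in the log.
      admin.flush(TableName.valueOf("hbase:namespace"));
    }
  }
}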
2023-07-12 05:17:30,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for aa9de83082eb73885ee3fc61a2c971d8: 2023-07-12 05:17:30,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689139026854.aa9de83082eb73885ee3fc61a2c971d8. 2023-07-12 05:17:30,249 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,249 INFO [RS:2;jenkins-hbase20:35711] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,35711,1689139024278; zookeeper connection closed. 2023-07-12 05:17:30,249 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:35711-0x1007f9c80ff0003, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,250 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44619,1689139024083 already deleted, retry=false 2023-07-12 05:17:30,250 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44619,1689139024083 expired; onlineServers=2 2023-07-12 05:17:30,250 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,35711,1689139024278] 2023-07-12 05:17:30,250 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,35711,1689139024278; numProcessing=3 2023-07-12 05:17:30,252 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97e206302d6d4a3da44fc6d680702a81 2023-07-12 05:17:30,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/info/08e18984b3ba4a72a9d5d5cb9e4a1bc3 as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/info/08e18984b3ba4a72a9d5d5cb9e4a1bc3 2023-07-12 05:17:30,254 DEBUG [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 05:17:30,255 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4200c6d6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4200c6d6 2023-07-12 05:17:30,261 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 08e18984b3ba4a72a9d5d5cb9e4a1bc3 2023-07-12 05:17:30,261 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/info/08e18984b3ba4a72a9d5d5cb9e4a1bc3, entries=100, sequenceid=204, filesize=16.3 K 2023-07-12 05:17:30,262 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/rep_barrier/057e5ec590c240e4824248d9c18a5c87 as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/rep_barrier/057e5ec590c240e4824248d9c18a5c87 2023-07-12 05:17:30,269 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 057e5ec590c240e4824248d9c18a5c87 2023-07-12 05:17:30,270 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/rep_barrier/057e5ec590c240e4824248d9c18a5c87, entries=18, sequenceid=204, filesize=6.9 K 2023-07-12 05:17:30,271 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/.tmp/table/97e206302d6d4a3da44fc6d680702a81 as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/table/97e206302d6d4a3da44fc6d680702a81 2023-07-12 05:17:30,279 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97e206302d6d4a3da44fc6d680702a81 2023-07-12 05:17:30,279 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/table/97e206302d6d4a3da44fc6d680702a81, entries=31, sequenceid=204, filesize=7.4 K 2023-07-12 05:17:30,280 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~79.51 KB/81416, heapSize ~125.41 KB/128424, currentSize=0 B/0 for 1588230740 in 233ms, sequenceid=204, compaction requested=false 2023-07-12 05:17:30,293 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/data/hbase/meta/1588230740/recovered.edits/207.seqid, newMaxSeqId=207, maxSeqId=1 2023-07-12 05:17:30,294 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:30,295 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:30,295 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:30,295 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:30,349 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,349 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:44619-0x1007f9c80ff0002, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,349 INFO 
[RS:1;jenkins-hbase20:44619] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44619,1689139024083; zookeeper connection closed. 2023-07-12 05:17:30,350 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2e85521] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2e85521 2023-07-12 05:17:30,350 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,35711,1689139024278 already deleted, retry=false 2023-07-12 05:17:30,350 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,35711,1689139024278 expired; onlineServers=1 2023-07-12 05:17:30,455 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46611,1689139023835; all regions closed. 2023-07-12 05:17:30,464 DEBUG [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs 2023-07-12 05:17:30,464 INFO [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46611%2C1689139023835.meta:.meta(num 1689139026546) 2023-07-12 05:17:30,472 DEBUG [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/oldWALs 2023-07-12 05:17:30,472 INFO [RS:0;jenkins-hbase20:46611] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46611%2C1689139023835:(num 1689139026255) 2023-07-12 05:17:30,472 DEBUG [RS:0;jenkins-hbase20:46611] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,472 INFO [RS:0;jenkins-hbase20:46611] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:30,472 INFO [RS:0;jenkins-hbase20:46611] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:30,473 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:30,474 INFO [RS:0;jenkins-hbase20:46611] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46611 2023-07-12 05:17:30,475 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46611,1689139023835 2023-07-12 05:17:30,475 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:30,476 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46611,1689139023835] 2023-07-12 05:17:30,476 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46611,1689139023835; numProcessing=4 2023-07-12 05:17:30,518 INFO [RS:3;jenkins-hbase20:38695] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38695,1689139027905; zookeeper connection closed. 
2023-07-12 05:17:30,518 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,518 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:38695-0x1007f9c80ff000b, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,519 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@76293ba9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@76293ba9 2023-07-12 05:17:30,576 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,576 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): regionserver:46611-0x1007f9c80ff0001, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,576 INFO [RS:0;jenkins-hbase20:46611] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46611,1689139023835; zookeeper connection closed. 2023-07-12 05:17:30,576 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4f64b65d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4f64b65d 2023-07-12 05:17:30,577 INFO [Listener at localhost.localdomain/33317] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 05:17:30,577 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46611,1689139023835 already deleted, retry=false 2023-07-12 05:17:30,577 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46611,1689139023835 expired; onlineServers=0 2023-07-12 05:17:30,577 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,41085,1689139021900' ***** 2023-07-12 05:17:30,577 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 05:17:30,578 DEBUG [M:0;jenkins-hbase20:41085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6efe24c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:30,578 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:30,582 INFO [M:0;jenkins-hbase20:41085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@29a0d1a2{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:30,582 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:30,582 DEBUG [Listener at 
localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:30,583 INFO [M:0;jenkins-hbase20:41085] server.AbstractConnector(383): Stopped ServerConnector@7991162b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,583 INFO [M:0;jenkins-hbase20:41085] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:30,583 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:30,584 INFO [M:0;jenkins-hbase20:41085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35456473{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:30,584 INFO [M:0;jenkins-hbase20:41085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70bfe8f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:30,585 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41085,1689139021900 2023-07-12 05:17:30,585 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41085,1689139021900; all regions closed. 2023-07-12 05:17:30,585 DEBUG [M:0;jenkins-hbase20:41085] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:30,585 INFO [M:0;jenkins-hbase20:41085] master.HMaster(1491): Stopping master jetty server 2023-07-12 05:17:30,586 INFO [M:0;jenkins-hbase20:41085] server.AbstractConnector(383): Stopped ServerConnector@19a15b5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:30,589 DEBUG [M:0;jenkins-hbase20:41085] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 05:17:30,589 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 05:17:30,589 DEBUG [M:0;jenkins-hbase20:41085] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 05:17:30,589 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139025823] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139025823,5,FailOnTimeoutGroup] 2023-07-12 05:17:30,589 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139025824] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139025824,5,FailOnTimeoutGroup] 2023-07-12 05:17:30,590 INFO [M:0;jenkins-hbase20:41085] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 05:17:30,590 INFO [M:0;jenkins-hbase20:41085] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 05:17:30,592 INFO [M:0;jenkins-hbase20:41085] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-12 05:17:30,592 DEBUG [M:0;jenkins-hbase20:41085] master.HMaster(1512): Stopping service threads 2023-07-12 05:17:30,592 INFO [M:0;jenkins-hbase20:41085] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 05:17:30,593 ERROR [M:0;jenkins-hbase20:41085] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 05:17:30,593 INFO [M:0;jenkins-hbase20:41085] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 05:17:30,593 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 05:17:30,596 DEBUG [M:0;jenkins-hbase20:41085] zookeeper.ZKUtil(398): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 05:17:30,596 WARN [M:0;jenkins-hbase20:41085] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 05:17:30,597 INFO [M:0;jenkins-hbase20:41085] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 05:17:30,597 INFO [M:0;jenkins-hbase20:41085] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 05:17:30,597 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:30,597 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:30,597 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:30,597 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:30,597 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 05:17:30,598 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=510.69 KB heapSize=610.95 KB 2023-07-12 05:17:30,637 INFO [M:0;jenkins-hbase20:41085] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=510.69 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7df1a0fc442545efb96ae09cde034ac1 2023-07-12 05:17:30,648 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7df1a0fc442545efb96ae09cde034ac1 as hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7df1a0fc442545efb96ae09cde034ac1 2023-07-12 05:17:30,655 INFO [M:0;jenkins-hbase20:41085] regionserver.HStore(1080): Added hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7df1a0fc442545efb96ae09cde034ac1, entries=151, sequenceid=1128, filesize=26.7 K 2023-07-12 05:17:30,656 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegion(2948): Finished flush of dataSize ~510.69 KB/522947, heapSize ~610.93 KB/625592, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 59ms, sequenceid=1128, compaction requested=false 2023-07-12 05:17:30,658 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:30,658 DEBUG [M:0;jenkins-hbase20:41085] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:30,666 INFO [M:0;jenkins-hbase20:41085] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 05:17:30,666 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:30,667 INFO [M:0;jenkins-hbase20:41085] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41085 2023-07-12 05:17:30,668 DEBUG [M:0;jenkins-hbase20:41085] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,41085,1689139021900 already deleted, retry=false 2023-07-12 05:17:30,770 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,770 INFO [M:0;jenkins-hbase20:41085] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41085,1689139021900; zookeeper connection closed. 
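At this point the master flushes its local master:store region, closes it, and exits; the entries that follow show the whole mini cluster being torn down ("Minicluster is down") and a fresh one started with the same StartMiniClusterOption values. A minimal sketch of that restart cycle, assuming the standard HBaseTestingUtility test helper and the option values reported in the log (1 master, 3 region servers, 3 data nodes, 1 ZK server), could look like this:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Illustrative only: tear down and restart a mini cluster with the option
// values printed by HBaseTestingUtility in the log.
public class MiniClusterRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
    // ... run assertions against the cluster ...
    util.shutdownMiniCluster();    // produces the "Minicluster is down" message
    util.startMiniCluster(option); // and the subsequent fresh DFS/ZK/HBase startup seen below
    util.shutdownMiniCluster();
  }
}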
2023-07-12 05:17:30,770 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): master:41085-0x1007f9c80ff0000, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:30,772 WARN [Listener at localhost.localdomain/33317] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:30,784 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:30,892 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:30,892 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-406622829-148.251.75.209-1689139018115 (Datanode Uuid bc697de6-8040-4b63-aa70-c7775bd0a646) service to localhost.localdomain/127.0.0.1:35039 2023-07-12 05:17:30,894 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data5/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:30,894 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data6/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:30,898 WARN [Listener at localhost.localdomain/33317] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:30,904 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:31,010 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:31,010 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-406622829-148.251.75.209-1689139018115 (Datanode Uuid aa67dd5b-48c8-44ab-a821-ff1add2bb0a9) service to localhost.localdomain/127.0.0.1:35039 2023-07-12 05:17:31,011 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data3/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:31,012 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data4/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-12 05:17:31,014 WARN [Listener at localhost.localdomain/33317] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:31,024 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:31,127 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:31,127 WARN [BP-406622829-148.251.75.209-1689139018115 heartbeating to localhost.localdomain/127.0.0.1:35039] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-406622829-148.251.75.209-1689139018115 (Datanode Uuid 2974569b-c7ac-4e48-bbf0-845a322afa24) service to localhost.localdomain/127.0.0.1:35039 2023-07-12 05:17:31,127 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data1/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:31,128 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/cluster_845a661f-7117-590a-c450-54338bd84e90/dfs/data/data2/current/BP-406622829-148.251.75.209-1689139018115] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:31,156 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 05:17:31,282 INFO [Listener at localhost.localdomain/33317] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 05:17:31,371 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 05:17:31,371 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.log.dir so I do NOT create it in target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c71930b7-5bad-e919-1286-77c10ed717f9/hadoop.tmp.dir so I do NOT create it in target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339, deleteOnExit=true 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/test.cache.data in system properties and HBase conf 2023-07-12 05:17:31,372 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 05:17:31,373 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir in system properties and HBase conf 2023-07-12 05:17:31,373 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 05:17:31,373 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 05:17:31,373 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 05:17:31,373 DEBUG [Listener at localhost.localdomain/33317] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 05:17:31,374 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/nfs.dump.dir in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir in system properties and HBase conf 2023-07-12 05:17:31,375 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:17:31,376 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 05:17:31,376 INFO [Listener at localhost.localdomain/33317] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 05:17:31,377 DEBUG [Listener at localhost.localdomain/33317-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1007f9c80ff000a, quorum=127.0.0.1:62508, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 05:17:31,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1007f9c80ff000a, quorum=127.0.0.1:62508, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 05:17:31,379 WARN [Listener at localhost.localdomain/33317] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:17:31,380 WARN [Listener at localhost.localdomain/33317] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:17:31,448 WARN [Listener at localhost.localdomain/33317] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:31,452 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:31,468 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/Jetty_localhost_localdomain_42651_hdfs____.vyph5e/webapp 2023-07-12 05:17:31,573 INFO [Listener at localhost.localdomain/33317] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42651 2023-07-12 05:17:31,576 WARN [Listener at localhost.localdomain/33317] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:17:31,576 WARN [Listener at localhost.localdomain/33317] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:17:31,654 WARN [Listener at localhost.localdomain/36357] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:31,708 WARN [Listener at localhost.localdomain/36357] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:31,722 WARN [Listener at localhost.localdomain/36357] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:31,724 INFO [Listener at localhost.localdomain/36357] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:31,731 INFO [Listener at localhost.localdomain/36357] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/Jetty_localhost_40327_datanode____.xs75ul/webapp 2023-07-12 05:17:31,845 INFO [Listener at localhost.localdomain/36357] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40327 2023-07-12 05:17:31,854 WARN [Listener at localhost.localdomain/45279] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:31,912 WARN [Listener at localhost.localdomain/45279] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:31,915 WARN [Listener at localhost.localdomain/45279] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:31,917 INFO [Listener at localhost.localdomain/45279] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:31,921 INFO [Listener at localhost.localdomain/45279] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/Jetty_localhost_36201_datanode____.ew0skp/webapp 2023-07-12 05:17:31,972 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd4617053a9902178: Processing first storage report for DS-e40953bd-633e-436c-ad31-337296b3d648 from datanode 4f625e4c-df9b-4fad-aee7-0e30eb33bc52 2023-07-12 05:17:31,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd4617053a9902178: from storage DS-e40953bd-633e-436c-ad31-337296b3d648 node DatanodeRegistration(127.0.0.1:38109, datanodeUuid=4f625e4c-df9b-4fad-aee7-0e30eb33bc52, infoPort=43839, infoSecurePort=0, ipcPort=45279, storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:31,973 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd4617053a9902178: Processing first storage report for DS-6d8e36fb-ae85-44b0-b437-c569bd443159 from datanode 4f625e4c-df9b-4fad-aee7-0e30eb33bc52 2023-07-12 05:17:31,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd4617053a9902178: from storage DS-6d8e36fb-ae85-44b0-b437-c569bd443159 node 
DatanodeRegistration(127.0.0.1:38109, datanodeUuid=4f625e4c-df9b-4fad-aee7-0e30eb33bc52, infoPort=43839, infoSecurePort=0, ipcPort=45279, storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:32,017 INFO [Listener at localhost.localdomain/45279] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36201 2023-07-12 05:17:32,026 WARN [Listener at localhost.localdomain/36259] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:32,044 WARN [Listener at localhost.localdomain/36259] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:32,047 WARN [Listener at localhost.localdomain/36259] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:32,048 INFO [Listener at localhost.localdomain/36259] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:32,057 INFO [Listener at localhost.localdomain/36259] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/Jetty_localhost_45311_datanode____wi81g9/webapp 2023-07-12 05:17:32,095 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:32,095 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:32,095 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 05:17:32,135 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e6886a71f01e04f: Processing first storage report for DS-54ff7283-56f4-4d08-88ce-7e645a3a2740 from datanode a4b3e39d-c69c-40f2-a5cb-1a5e79ffe697 2023-07-12 05:17:32,135 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e6886a71f01e04f: from storage DS-54ff7283-56f4-4d08-88ce-7e645a3a2740 node DatanodeRegistration(127.0.0.1:43295, datanodeUuid=a4b3e39d-c69c-40f2-a5cb-1a5e79ffe697, infoPort=36933, infoSecurePort=0, ipcPort=36259, storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:32,135 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e6886a71f01e04f: Processing first storage report for DS-150a96b9-dd81-4840-a843-13b8b85eecf8 from datanode a4b3e39d-c69c-40f2-a5cb-1a5e79ffe697 2023-07-12 05:17:32,135 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e6886a71f01e04f: from storage DS-150a96b9-dd81-4840-a843-13b8b85eecf8 node DatanodeRegistration(127.0.0.1:43295, datanodeUuid=a4b3e39d-c69c-40f2-a5cb-1a5e79ffe697, infoPort=36933, infoSecurePort=0, ipcPort=36259, 
storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:32,158 INFO [Listener at localhost.localdomain/36259] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45311 2023-07-12 05:17:32,170 WARN [Listener at localhost.localdomain/42409] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:32,266 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f2cec7e320e1fed: Processing first storage report for DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b from datanode 0444b1ce-b363-4678-b21f-05de9d58a996 2023-07-12 05:17:32,266 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f2cec7e320e1fed: from storage DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b node DatanodeRegistration(127.0.0.1:36135, datanodeUuid=0444b1ce-b363-4678-b21f-05de9d58a996, infoPort=46569, infoSecurePort=0, ipcPort=42409, storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:32,266 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f2cec7e320e1fed: Processing first storage report for DS-ec7f1f03-4212-44b3-9c48-305b08ea0237 from datanode 0444b1ce-b363-4678-b21f-05de9d58a996 2023-07-12 05:17:32,266 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f2cec7e320e1fed: from storage DS-ec7f1f03-4212-44b3-9c48-305b08ea0237 node DatanodeRegistration(127.0.0.1:36135, datanodeUuid=0444b1ce-b363-4678-b21f-05de9d58a996, infoPort=46569, infoSecurePort=0, ipcPort=42409, storageInfo=lv=-57;cid=testClusterID;nsid=598824885;c=1689139051382), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:32,292 DEBUG [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160 2023-07-12 05:17:32,308 INFO [Listener at localhost.localdomain/42409] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/zookeeper_0, clientPort=63349, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 05:17:32,311 INFO [Listener at localhost.localdomain/42409] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63349 2023-07-12 05:17:32,311 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,312 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,346 INFO [Listener at localhost.localdomain/42409] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca with version=8 2023-07-12 05:17:32,346 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/hbase-staging 2023-07-12 05:17:32,347 DEBUG [Listener at localhost.localdomain/42409] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 05:17:32,347 DEBUG [Listener at localhost.localdomain/42409] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 05:17:32,347 DEBUG [Listener at localhost.localdomain/42409] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 05:17:32,347 DEBUG [Listener at localhost.localdomain/42409] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-12 05:17:32,348 INFO [Listener at localhost.localdomain/42409] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:32,349 INFO [Listener at localhost.localdomain/42409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:32,350 INFO [Listener at localhost.localdomain/42409] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44483 2023-07-12 05:17:32,351 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,352 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,353 INFO [Listener at localhost.localdomain/42409] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44483 connecting to ZooKeeper ensemble=127.0.0.1:63349 2023-07-12 05:17:32,361 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:444830x0, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:32,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44483-0x1007f9cfb890000 connected 2023-07-12 05:17:32,391 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:32,391 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:32,392 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:32,398 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44483 2023-07-12 05:17:32,398 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44483 2023-07-12 05:17:32,399 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44483 2023-07-12 05:17:32,401 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44483 2023-07-12 05:17:32,402 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44483 2023-07-12 05:17:32,404 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:32,404 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:32,404 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:32,405 INFO [Listener at localhost.localdomain/42409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 05:17:32,405 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:32,405 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:32,406 INFO [Listener at localhost.localdomain/42409] http.HttpServer(783): 
ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 05:17:32,406 INFO [Listener at localhost.localdomain/42409] http.HttpServer(1146): Jetty bound to port 41667 2023-07-12 05:17:32,406 INFO [Listener at localhost.localdomain/42409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:32,410 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,411 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5837a87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:32,411 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,412 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d3bf816{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:32,521 INFO [Listener at localhost.localdomain/42409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:32,522 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:32,523 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:32,523 INFO [Listener at localhost.localdomain/42409] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:32,524 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,526 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4941a526{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/jetty-0_0_0_0-41667-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2894260315205906008/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:32,528 INFO [Listener at localhost.localdomain/42409] server.AbstractConnector(333): Started ServerConnector@7c97e66a{HTTP/1.1, (http/1.1)}{0.0.0.0:41667} 2023-07-12 05:17:32,528 INFO [Listener at localhost.localdomain/42409] server.Server(415): Started @36458ms 2023-07-12 05:17:32,528 INFO [Listener at localhost.localdomain/42409] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca, hbase.cluster.distributed=false 2023-07-12 05:17:32,544 INFO [Listener at localhost.localdomain/42409] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:32,544 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,545 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,545 INFO [Listener at localhost.localdomain/42409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:32,545 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,545 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:32,545 INFO [Listener at localhost.localdomain/42409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:32,546 INFO [Listener at localhost.localdomain/42409] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45775 2023-07-12 05:17:32,547 INFO [Listener at localhost.localdomain/42409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:32,550 DEBUG [Listener at localhost.localdomain/42409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:32,551 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,552 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,553 INFO [Listener at localhost.localdomain/42409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45775 connecting to ZooKeeper ensemble=127.0.0.1:63349 2023-07-12 05:17:32,557 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:457750x0, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:32,558 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:457750x0, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:32,559 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45775-0x1007f9cfb890001 connected 2023-07-12 05:17:32,559 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:32,560 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:32,562 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45775 2023-07-12 05:17:32,563 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45775 2023-07-12 05:17:32,566 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45775 2023-07-12 05:17:32,567 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45775 2023-07-12 05:17:32,567 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45775 2023-07-12 05:17:32,570 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:32,570 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:32,570 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:32,570 INFO [Listener at localhost.localdomain/42409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:32,571 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:32,571 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:32,571 INFO [Listener at localhost.localdomain/42409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:32,572 INFO [Listener at localhost.localdomain/42409] http.HttpServer(1146): Jetty bound to port 34273 2023-07-12 05:17:32,572 INFO [Listener at localhost.localdomain/42409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:32,574 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,574 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d5f7bf2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:32,575 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,575 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@73c04d86{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:32,691 INFO [Listener at localhost.localdomain/42409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:32,692 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:32,692 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:32,693 INFO [Listener at localhost.localdomain/42409] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:32,695 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,696 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@63e98841{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/jetty-0_0_0_0-34273-hbase-server-2_4_18-SNAPSHOT_jar-_-any-846426931810478224/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:32,697 INFO [Listener at localhost.localdomain/42409] server.AbstractConnector(333): Started ServerConnector@7ab741a2{HTTP/1.1, (http/1.1)}{0.0.0.0:34273} 2023-07-12 05:17:32,698 INFO [Listener at localhost.localdomain/42409] server.Server(415): Started @36628ms 2023-07-12 05:17:32,710 INFO [Listener at localhost.localdomain/42409] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:32,711 INFO [Listener at localhost.localdomain/42409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:32,713 INFO [Listener at localhost.localdomain/42409] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45183 2023-07-12 05:17:32,713 INFO [Listener at localhost.localdomain/42409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:32,719 DEBUG [Listener at localhost.localdomain/42409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:32,720 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,721 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,722 INFO [Listener at localhost.localdomain/42409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45183 connecting to ZooKeeper ensemble=127.0.0.1:63349 2023-07-12 05:17:32,726 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:451830x0, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:32,727 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:451830x0, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:32,728 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45183-0x1007f9cfb890002 connected 2023-07-12 05:17:32,728 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:32,728 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:32,730 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45183 2023-07-12 05:17:32,731 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45183 2023-07-12 05:17:32,731 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45183 2023-07-12 05:17:32,731 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45183 2023-07-12 05:17:32,732 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45183 2023-07-12 05:17:32,734 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:32,734 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:32,734 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:32,735 INFO [Listener at localhost.localdomain/42409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:32,735 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:32,735 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:32,735 INFO [Listener at localhost.localdomain/42409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:32,736 INFO [Listener at localhost.localdomain/42409] http.HttpServer(1146): Jetty bound to port 36077 2023-07-12 05:17:32,736 INFO [Listener at localhost.localdomain/42409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:32,742 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,743 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@8bfb4d5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:32,743 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,743 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28745c0c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:32,857 INFO [Listener at localhost.localdomain/42409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:32,858 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:32,858 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:32,859 INFO [Listener at localhost.localdomain/42409] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:32,865 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,867 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3c96a83e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/jetty-0_0_0_0-36077-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4053261130491773358/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:32,869 INFO [Listener at localhost.localdomain/42409] server.AbstractConnector(333): Started ServerConnector@2fce9c34{HTTP/1.1, (http/1.1)}{0.0.0.0:36077} 2023-07-12 05:17:32,869 INFO [Listener at localhost.localdomain/42409] server.Server(415): Started @36799ms 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:32,880 INFO [Listener at localhost.localdomain/42409] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:32,882 INFO [Listener at localhost.localdomain/42409] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45255 2023-07-12 05:17:32,882 INFO [Listener at localhost.localdomain/42409] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:32,890 DEBUG [Listener at localhost.localdomain/42409] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:32,891 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,892 INFO [Listener at localhost.localdomain/42409] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:32,894 INFO [Listener at localhost.localdomain/42409] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45255 connecting to ZooKeeper ensemble=127.0.0.1:63349 2023-07-12 05:17:32,898 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:452550x0, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:32,899 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:32,899 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45255-0x1007f9cfb890003 connected 2023-07-12 05:17:32,900 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:32,900 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ZKUtil(164): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:32,906 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-12 05:17:32,910 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45255 2023-07-12 05:17:32,912 DEBUG [Listener at localhost.localdomain/42409] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45255 2023-07-12 05:17:32,915 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-12 05:17:32,915 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45255 2023-07-12 05:17:32,917 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:32,917 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:32,917 INFO [Listener at localhost.localdomain/42409] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:32,918 INFO [Listener at localhost.localdomain/42409] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:32,918 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:32,918 INFO [Listener at localhost.localdomain/42409] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:32,918 INFO [Listener at localhost.localdomain/42409] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:32,919 INFO [Listener at localhost.localdomain/42409] http.HttpServer(1146): Jetty bound to port 45619 2023-07-12 05:17:32,919 INFO [Listener at localhost.localdomain/42409] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:32,925 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,925 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24ce9aa0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:32,926 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:32,926 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55578c3e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:33,033 INFO [Listener at localhost.localdomain/42409] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:33,035 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:33,035 INFO [Listener at localhost.localdomain/42409] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:33,035 INFO [Listener at localhost.localdomain/42409] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:33,036 INFO [Listener at localhost.localdomain/42409] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:33,038 INFO [Listener at localhost.localdomain/42409] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@614a7c65{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/java.io.tmpdir/jetty-0_0_0_0-45619-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3021453679032371277/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:33,039 INFO [Listener at localhost.localdomain/42409] server.AbstractConnector(333): Started ServerConnector@6902cbe7{HTTP/1.1, (http/1.1)}{0.0.0.0:45619} 2023-07-12 05:17:33,039 INFO [Listener at localhost.localdomain/42409] server.Server(415): Started @36969ms 2023-07-12 05:17:33,041 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:33,046 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@46f3c937{HTTP/1.1, (http/1.1)}{0.0.0.0:43841} 2023-07-12 05:17:33,046 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @36976ms 2023-07-12 05:17:33,046 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,048 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:33,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,049 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:33,049 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:33,049 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:33,049 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:33,050 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,051 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:33,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:33,052 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44483,1689139052348 from backup master directory 2023-07-12 05:17:33,053 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,053 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 05:17:33,053 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:33,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,085 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/hbase.id with ID: 193e4d8d-b5ce-4def-894b-d67b5005d970 2023-07-12 05:17:33,102 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:33,105 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,131 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x14b39b87 to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:33,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3618cfc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:33,138 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:33,139 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 05:17:33,139 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:33,141 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store-tmp 2023-07-12 05:17:33,168 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,168 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:33,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:33,169 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:33,169 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:33,169 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:33,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:33,169 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:33,170 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/WALs/jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,174 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44483%2C1689139052348, suffix=, logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/WALs/jenkins-hbase20.apache.org,44483,1689139052348, archiveDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/oldWALs, maxLogs=10 2023-07-12 05:17:33,192 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK] 2023-07-12 05:17:33,194 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK] 2023-07-12 05:17:33,195 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK] 2023-07-12 05:17:33,223 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/WALs/jenkins-hbase20.apache.org,44483,1689139052348/jenkins-hbase20.apache.org%2C44483%2C1689139052348.1689139053175 2023-07-12 05:17:33,226 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK], DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK], DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK]] 2023-07-12 05:17:33,227 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:33,227 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,227 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,227 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,230 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,232 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 05:17:33,233 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 05:17:33,234 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,235 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,235 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,238 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:33,248 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:33,249 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11418760640, jitterRate=0.06345495581626892}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:33,249 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:33,253 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 05:17:33,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 05:17:33,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 05:17:33,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 05:17:33,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 3 msec 2023-07-12 05:17:33,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 05:17:33,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 05:17:33,260 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 05:17:33,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 05:17:33,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 05:17:33,263 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 05:17:33,263 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 05:17:33,265 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,265 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 05:17:33,265 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 05:17:33,266 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 05:17:33,267 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:33,267 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:33,267 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:33,267 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:33,267 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,268 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44483,1689139052348, sessionid=0x1007f9cfb890000, setting cluster-up flag (Was=false) 2023-07-12 05:17:33,285 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,288 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 05:17:33,288 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,291 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,294 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 05:17:33,294 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:33,295 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.hbase-snapshot/.tmp 2023-07-12 05:17:33,301 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 05:17:33,302 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 05:17:33,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 05:17:33,303 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:33,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 05:17:33,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-12 05:17:33,304 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:33,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:33,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 05:17:33,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:33,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:33,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689139083319 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 05:17:33,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,322 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:33,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 05:17:33,323 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 05:17:33,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 05:17:33,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 05:17:33,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 05:17:33,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 05:17:33,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139053323,5,FailOnTimeoutGroup] 2023-07-12 05:17:33,324 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139053323,5,FailOnTimeoutGroup] 2023-07-12 05:17:33,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 05:17:33,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
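Annotation: the cleaner chores initialized above (TimeToLiveLogCleaner, ReplicationLogCleaner, HFileLinkCleaner, SnapshotHFileCleaner, TimeToLiveHFileCleaner, ...) are wired in as delegate plugin lists on the master; some, such as ReplicationLogCleaner and SnapshotHFileCleaner, are typically appended by the master itself at startup. A minimal sketch assuming the standard plugin keys; the TTL value shown is illustrative, not a setting from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Comma-separated delegate classes for the WAL ("log") cleaner chore.
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
        // Delegates for the HFile cleaner chore.
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
                + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner");
        // How long archived WALs are retained before TimeToLiveLogCleaner
        // allows deletion (illustrative value).
        conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
      }
    }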
2023-07-12 05:17:33,324 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:33,335 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:33,336 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:33,336 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca 2023-07-12 05:17:33,343 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(951): ClusterId : 193e4d8d-b5ce-4def-894b-d67b5005d970 2023-07-12 05:17:33,343 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(951): ClusterId : 193e4d8d-b5ce-4def-894b-d67b5005d970 2023-07-12 05:17:33,343 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(951): ClusterId : 193e4d8d-b5ce-4def-894b-d67b5005d970 2023-07-12 05:17:33,345 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:33,344 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc 
initializing 2023-07-12 05:17:33,345 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:33,347 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:33,347 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:33,347 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:33,347 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:33,347 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:33,347 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:33,348 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:33,348 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:33,348 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:33,353 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ReadOnlyZKClient(139): Connect 0x7445cfab to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:33,353 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ReadOnlyZKClient(139): Connect 0x64111010 to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:33,353 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ReadOnlyZKClient(139): Connect 0x7b4fad94 to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:33,368 DEBUG [RS:1;jenkins-hbase20:45183] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a822f53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:33,368 DEBUG [RS:1;jenkins-hbase20:45183] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ebda4db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:33,369 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,369 DEBUG [RS:0;jenkins-hbase20:45775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ee97b60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:33,369 DEBUG [RS:2;jenkins-hbase20:45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75885c8d, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:33,369 DEBUG [RS:0;jenkins-hbase20:45775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2bebe8d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:33,369 DEBUG [RS:2;jenkins-hbase20:45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71b7c6e4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:33,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:33,372 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/info 2023-07-12 05:17:33,373 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:33,373 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,373 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:33,375 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:33,375 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:33,376 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,376 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:33,378 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/table 2023-07-12 05:17:33,378 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:33,378 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,379 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:45255 2023-07-12 05:17:33,379 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:45183 2023-07-12 05:17:33,379 INFO [RS:2;jenkins-hbase20:45255] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:33,379 INFO [RS:2;jenkins-hbase20:45255] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:33,379 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:45775 2023-07-12 05:17:33,380 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1022): About to register with Master. 
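Annotation: the meta bootstrap above prints a full table descriptor, e.g. {NAME => 'info', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192', BLOOMFILTER => 'NONE', ...}. For comparison only, a user table with a similar family layout could be declared through the public client builder API roughly as below; this is a sketch, not the code the test runs, and the table name is hypothetical.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static TableDescriptor build() {
        // Mirrors the attributes printed for the 'info' family above.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setBloomFilterType(BloomType.NONE)
            .setKeepDeletedCells(KeepDeletedCells.FALSE)
            .setDataBlockEncoding(DataBlockEncoding.NONE)
            .setCompressionType(Compression.Algorithm.NONE)
            .setTimeToLive(HConstants.FOREVER)
            .setScope(0) // REPLICATION_SCOPE => '0'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table")) // hypothetical name
            .setColumnFamily(info)
            .build();
      }
    }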
2023-07-12 05:17:33,379 INFO [RS:1;jenkins-hbase20:45183] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:33,380 INFO [RS:1;jenkins-hbase20:45183] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:33,380 INFO [RS:0;jenkins-hbase20:45775] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:33,380 INFO [RS:0;jenkins-hbase20:45775] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:33,380 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:33,380 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:33,380 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44483,1689139052348 with isa=jenkins-hbase20.apache.org/148.251.75.209:45255, startcode=1689139052879 2023-07-12 05:17:33,380 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44483,1689139052348 with isa=jenkins-hbase20.apache.org/148.251.75.209:45775, startcode=1689139052544 2023-07-12 05:17:33,380 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44483,1689139052348 with isa=jenkins-hbase20.apache.org/148.251.75.209:45183, startcode=1689139052710 2023-07-12 05:17:33,380 DEBUG [RS:0;jenkins-hbase20:45775] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:33,381 DEBUG [RS:1;jenkins-hbase20:45183] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:33,380 DEBUG [RS:2;jenkins-hbase20:45255] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:33,389 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56289, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:33,389 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740 2023-07-12 05:17:33,390 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39367, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:33,390 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59271, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:33,390 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740 2023-07-12 05:17:33,392 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44483] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,392 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] 
rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:33,393 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 05:17:33,393 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44483] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,393 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:33,393 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 05:17:33,393 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44483] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,393 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:33,393 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 05:17:33,394 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca 2023-07-12 05:17:33,394 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36357 2023-07-12 05:17:33,394 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41667 2023-07-12 05:17:33,395 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:33,395 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ZKUtil(162): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,396 WARN [RS:2;jenkins-hbase20:45255] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
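Annotation: the ServerEventsListenerThread entries above show the rsgroup endpoint folding each newly registered region server into the default group (servers: 1, 2, then 3). A hedged sketch of how that membership could be inspected with the branch-2 rsgroup client; the connection setup is assumed, and RSGroupAdminClient is the helper class the rsgroup module itself uses rather than a stable public API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // After the three registrations logged above, the default group
          // would report all three region servers.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          for (Address server : defaultGroup.getServers()) {
            System.out.println("default group member: " + server);
          }
        }
      }
    }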
2023-07-12 05:17:33,396 INFO [RS:2;jenkins-hbase20:45255] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:33,397 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,399 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca 2023-07-12 05:17:33,399 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 05:17:33,399 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca 2023-07-12 05:17:33,399 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36357 2023-07-12 05:17:33,401 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41667 2023-07-12 05:17:33,401 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45255,1689139052879] 2023-07-12 05:17:33,399 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36357 2023-07-12 05:17:33,401 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41667 2023-07-12 05:17:33,402 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:33,412 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:33,415 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:33,417 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ZKUtil(162): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,418 WARN [RS:0;jenkins-hbase20:45775] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
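Annotation: the WALFactory entries here (and for the other two region servers below) show the async FS WAL provider being instantiated. A minimal sketch of how that selection is expressed, assuming the standard hbase.wal.provider key; "asyncfs" is the value that maps to AsyncFSWALProvider, while "filesystem" would select the classic FSHLog-based provider.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Selects the provider instantiated in the log above.
        conf.set("hbase.wal.provider", "asyncfs");
        System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
      }
    }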
2023-07-12 05:17:33,418 INFO [RS:0;jenkins-hbase20:45775] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:33,418 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,418 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10613564000, jitterRate=-0.011534824967384338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:33,418 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:33,419 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:33,419 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:33,419 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:33,419 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ZKUtil(162): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,419 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 05:17:33,419 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ZKUtil(162): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,419 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:33,419 WARN [RS:1;jenkins-hbase20:45183] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
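Annotation: the flushSizeLowerBound=44739242 printed above follows from the fallback described two entries earlier. With no hbase.hregion.percolumnfamilyflush.size.lower.bound set in the hbase:meta descriptor, the policy divides the region memstore flush size by the number of families; assuming the default flush size of 128 MB (which matches the numbers printed), 134217728 bytes / 3 families (info, rep_barrier, table) = 44739242 bytes, i.e. roughly 42.7 MB, consistent with both log entries.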
2023-07-12 05:17:33,419 INFO [RS:1;jenkins-hbase20:45183] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:33,419 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ZKUtil(162): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,419 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,420 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ZKUtil(162): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,421 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:33,421 INFO [RS:2;jenkins-hbase20:45255] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:33,421 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45775,1689139052544] 2023-07-12 05:17:33,421 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45183,1689139052710] 2023-07-12 05:17:33,431 INFO [RS:2;jenkins-hbase20:45255] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:33,435 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:33,435 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:33,436 INFO [RS:2;jenkins-hbase20:45255] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:33,436 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,437 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:33,437 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 05:17:33,437 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 05:17:33,443 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 05:17:33,444 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:33,447 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
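Annotation: the PressureAwareCompactionThroughputController entry above reports a 50-100 MB/s throttling band with a 60 s tuning period. A sketch of how those bounds could be configured, assuming the standard throughput-bound keys (verify the exact names against the branch-2.4 source); the values simply restate what the log prints.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Matches "higher bound: 100.00 MB/second, lower bound 50.00 MB/second".
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
      }
    }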
2023-07-12 05:17:33,447 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,447 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,448 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,448 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,448 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,449 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:33,449 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,449 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 05:17:33,449 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,450 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,450 DEBUG [RS:2;jenkins-hbase20:45255] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,450 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ZKUtil(162): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,451 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ZKUtil(162): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,451 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ZKUtil(162): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,452 DEBUG [RS:0;jenkins-hbase20:45775] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:33,452 INFO [RS:0;jenkins-hbase20:45775] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:33,453 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ZKUtil(162): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,454 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ZKUtil(162): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,454 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ZKUtil(162): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,455 DEBUG [RS:1;jenkins-hbase20:45183] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:33,455 INFO [RS:1;jenkins-hbase20:45183] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:33,460 INFO [RS:1;jenkins-hbase20:45183] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:33,460 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,460 INFO [RS:0;jenkins-hbase20:45775] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:33,460 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,460 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,460 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,467 INFO [RS:1;jenkins-hbase20:45183] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:33,467 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,467 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:33,471 INFO [RS:0;jenkins-hbase20:45775] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:33,472 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,472 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:33,473 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
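Annotation: each MemStoreFlusher entry above reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M. Assuming the default fractions, these numbers are consistent: the low mark is 95% of the limit (782.4 MB x 0.95 = 743.28 MB), and a 782.4 MB limit corresponds to the default 40% of a roughly 1956 MB test heap. A sketch of the two fractions, assuming the usual keys:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the region server heap usable by all memstores (default 0.4).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Low-water mark as a fraction of the limit above (default 0.95),
        // which yields the 743.3 M mark printed in the log.
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
      }
    }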
2023-07-12 05:17:33,473 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,473 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:33,474 DEBUG [RS:1;jenkins-hbase20:45183] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,474 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service 
name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,475 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,475 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,475 DEBUG [RS:0;jenkins-hbase20:45775] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:33,484 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,484 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,486 INFO [RS:2;jenkins-hbase20:45255] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:33,486 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45255,1689139052879-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,498 INFO [RS:0;jenkins-hbase20:45775] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:33,498 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45775,1689139052544-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:33,500 INFO [RS:2;jenkins-hbase20:45255] regionserver.Replication(203): jenkins-hbase20.apache.org,45255,1689139052879 started 2023-07-12 05:17:33,500 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45255,1689139052879, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45255, sessionid=0x1007f9cfb890003 2023-07-12 05:17:33,500 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:33,500 DEBUG [RS:2;jenkins-hbase20:45255] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,500 INFO [RS:1;jenkins-hbase20:45183] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:33,500 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45255,1689139052879' 2023-07-12 05:17:33,500 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45183,1689139052710-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,500 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45255,1689139052879' 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:33,501 DEBUG [RS:2;jenkins-hbase20:45255] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:33,502 DEBUG [RS:2;jenkins-hbase20:45255] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:33,502 INFO [RS:2;jenkins-hbase20:45255] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 05:17:33,504 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
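Annotation: the online-snapshot procedure member started above (watching the acquired/reached/abort znodes under /hbase/online-snapshot) is the ZooKeeper-coordinated mechanism region servers use for snapshots of enabled tables in this release line. Purely as an illustration, and not something this test does here, a client-triggered snapshot would look roughly like the sketch below; the snapshot and table names are hypothetical.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class SnapshotSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // For an enabled table this is coordinated across the region servers,
          // which is what the online-snapshot member threads above participate in.
          admin.snapshot("example_snapshot", TableName.valueOf("example_table"));
        }
      }
    }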
2023-07-12 05:17:33,504 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ZKUtil(398): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 05:17:33,504 INFO [RS:2;jenkins-hbase20:45255] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 05:17:33,505 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,505 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,510 INFO [RS:0;jenkins-hbase20:45775] regionserver.Replication(203): jenkins-hbase20.apache.org,45775,1689139052544 started 2023-07-12 05:17:33,510 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45775,1689139052544, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45775, sessionid=0x1007f9cfb890001 2023-07-12 05:17:33,510 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:33,510 DEBUG [RS:0;jenkins-hbase20:45775] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,510 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45775,1689139052544' 2023-07-12 05:17:33,511 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:33,512 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:33,515 INFO [RS:1;jenkins-hbase20:45183] regionserver.Replication(203): jenkins-hbase20.apache.org,45183,1689139052710 started 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:33,515 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45183,1689139052710, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45183, sessionid=0x1007f9cfb890002 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:33,515 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:33,515 DEBUG [RS:1;jenkins-hbase20:45183] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45775,1689139052544' 2023-07-12 05:17:33,515 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45183,1689139052710' 2023-07-12 
05:17:33,515 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:33,515 DEBUG [RS:0;jenkins-hbase20:45775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:33,515 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:33,516 DEBUG [RS:0;jenkins-hbase20:45775] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45183,1689139052710' 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:33,516 INFO [RS:0;jenkins-hbase20:45775] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 05:17:33,516 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:33,516 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ZKUtil(398): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 05:17:33,516 INFO [RS:0;jenkins-hbase20:45775] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 05:17:33,516 DEBUG [RS:1;jenkins-hbase20:45183] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:33,517 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,517 INFO [RS:1;jenkins-hbase20:45183] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 05:17:33,517 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,517 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
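Annotation: the RegionServerRpcQuotaManager entries above show RPC quota support starting with throttling enabled and no /hbase/rpc-throttle override znode present. As a hedged illustration of what that machinery enforces, not part of this test, a per-user request throttle could be set through the quota API; the user name and limit are made up.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class RpcQuotaSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Limit the hypothetical user "jenkins_user" to 100 requests per second;
          // the quota manager started in the log above is what enforces it.
          admin.setQuota(QuotaSettingsFactory.throttleUser(
              "jenkins_user", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
        }
      }
    }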
2023-07-12 05:17:33,518 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ZKUtil(398): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 05:17:33,518 INFO [RS:1;jenkins-hbase20:45183] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 05:17:33,518 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,518 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:33,600 DEBUG [jenkins-hbase20:44483] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:33,602 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45255,1689139052879, state=OPENING 2023-07-12 05:17:33,603 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 05:17:33,604 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:33,604 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45255,1689139052879}] 2023-07-12 05:17:33,604 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:33,609 INFO [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45255%2C1689139052879, suffix=, logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45255,1689139052879, archiveDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs, maxLogs=32 2023-07-12 05:17:33,618 WARN [ReadOnlyZKClient-127.0.0.1:63349@0x14b39b87] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 05:17:33,619 INFO [RS:0;jenkins-hbase20:45775] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45775%2C1689139052544, suffix=, logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45775,1689139052544, 
archiveDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs, maxLogs=32 2023-07-12 05:17:33,619 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:33,621 INFO [RS:1;jenkins-hbase20:45183] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45183%2C1689139052710, suffix=, logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45183,1689139052710, archiveDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs, maxLogs=32 2023-07-12 05:17:33,627 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:33,627 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45255] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:58368 deadline: 1689139113627, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,648 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK] 2023-07-12 05:17:33,648 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK] 2023-07-12 05:17:33,648 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK] 2023-07-12 05:17:33,651 INFO [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45255,1689139052879/jenkins-hbase20.apache.org%2C45255%2C1689139052879.1689139053610 2023-07-12 05:17:33,652 DEBUG [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK], DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK], DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK]] 2023-07-12 05:17:33,656 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK] 2023-07-12 05:17:33,656 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK] 2023-07-12 05:17:33,656 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK] 2023-07-12 05:17:33,664 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK] 2023-07-12 05:17:33,664 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK] 2023-07-12 05:17:33,664 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK] 2023-07-12 05:17:33,666 INFO [RS:0;jenkins-hbase20:45775] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45775,1689139052544/jenkins-hbase20.apache.org%2C45775%2C1689139052544.1689139053619 2023-07-12 05:17:33,669 DEBUG [RS:0;jenkins-hbase20:45775] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK], DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK], DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK]] 2023-07-12 05:17:33,671 INFO [RS:1;jenkins-hbase20:45183] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45183,1689139052710/jenkins-hbase20.apache.org%2C45183%2C1689139052710.1689139053627 2023-07-12 05:17:33,675 DEBUG [RS:1;jenkins-hbase20:45183] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK], DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK], DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK]] 2023-07-12 05:17:33,759 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,761 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:33,765 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58374, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:33,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 05:17:33,771 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:33,773 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45255%2C1689139052879.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45255,1689139052879, archiveDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs, maxLogs=32 2023-07-12 05:17:33,796 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK] 2023-07-12 05:17:33,796 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK] 2023-07-12 05:17:33,798 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK] 2023-07-12 05:17:33,803 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/WALs/jenkins-hbase20.apache.org,45255,1689139052879/jenkins-hbase20.apache.org%2C45255%2C1689139052879.meta.1689139053774.meta 2023-07-12 05:17:33,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36135,DS-3c6d5fad-8d6d-4659-ba32-2b152b2ffd6b,DISK], DatanodeInfoWithStorage[127.0.0.1:43295,DS-54ff7283-56f4-4d08-88ce-7e645a3a2740,DISK], DatanodeInfoWithStorage[127.0.0.1:38109,DS-e40953bd-633e-436c-ad31-337296b3d648,DISK]] 2023-07-12 05:17:33,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:33,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:33,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 05:17:33,804 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 05:17:33,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 05:17:33,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 05:17:33,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 05:17:33,806 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:33,807 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/info 2023-07-12 05:17:33,807 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/info 2023-07-12 05:17:33,807 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:33,808 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,808 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:33,809 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:33,809 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:33,810 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:33,810 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,811 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:33,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/table 2023-07-12 05:17:33,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/table 2023-07-12 05:17:33,812 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:33,814 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:33,815 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740 2023-07-12 05:17:33,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740 2023-07-12 05:17:33,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 05:17:33,821 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:33,823 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10371757760, jitterRate=-0.03405478596687317}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:33,823 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:33,824 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689139053759 2023-07-12 05:17:33,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 05:17:33,830 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 05:17:33,831 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45255,1689139052879, state=OPEN 2023-07-12 05:17:33,832 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 05:17:33,832 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:33,833 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 05:17:33,833 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45255,1689139052879 in 228 msec 2023-07-12 05:17:33,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 05:17:33,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 396 msec 2023-07-12 05:17:33,836 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 532 msec 2023-07-12 05:17:33,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689139053836, completionTime=-1 2023-07-12 05:17:33,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 05:17:33,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 05:17:33,840 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 05:17:33,840 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689139113840 2023-07-12 05:17:33,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689139173840 2023-07-12 05:17:33,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-12 05:17:33,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44483,1689139052348-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44483,1689139052348-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44483,1689139052348-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44483, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:33,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 05:17:33,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:33,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 05:17:33,852 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 05:17:33,854 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:33,855 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:33,857 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:33,857 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f empty. 2023-07-12 05:17:33,858 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:33,858 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 05:17:33,878 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:33,880 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8d7db99eb801bd15082adcf4cabefa8f, NAME => 'hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp 2023-07-12 05:17:33,894 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,894 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8d7db99eb801bd15082adcf4cabefa8f, disabling compactions & flushes 2023-07-12 05:17:33,894 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:33,894 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:33,895 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. after waiting 0 ms 2023-07-12 05:17:33,895 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:33,895 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:33,895 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8d7db99eb801bd15082adcf4cabefa8f: 2023-07-12 05:17:33,897 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:33,899 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139053899"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139053899"}]},"ts":"1689139053899"} 2023-07-12 05:17:33,903 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:33,904 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:33,904 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139053904"}]},"ts":"1689139053904"} 2023-07-12 05:17:33,906 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 05:17:33,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:33,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:33,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:33,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:33,908 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:33,909 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d7db99eb801bd15082adcf4cabefa8f, ASSIGN}] 2023-07-12 05:17:33,912 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8d7db99eb801bd15082adcf4cabefa8f, ASSIGN 2023-07-12 05:17:33,913 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8d7db99eb801bd15082adcf4cabefa8f, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45255,1689139052879; forceNewPlan=false, retain=false 2023-07-12 05:17:33,930 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:33,932 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 05:17:33,934 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:33,935 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:33,936 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:33,937 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5 empty. 
2023-07-12 05:17:33,937 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:33,938 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 05:17:33,951 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:33,953 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4016c8687e7bf933eac79b515aa2bea5, NAME => 'hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp 2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 4016c8687e7bf933eac79b515aa2bea5, disabling compactions & flushes 2023-07-12 05:17:33,966 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. after waiting 0 ms 2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:33,966 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 
2023-07-12 05:17:33,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 4016c8687e7bf933eac79b515aa2bea5: 2023-07-12 05:17:33,969 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:33,970 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139053970"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139053970"}]},"ts":"1689139053970"} 2023-07-12 05:17:33,971 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:33,973 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:33,973 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139053973"}]},"ts":"1689139053973"} 2023-07-12 05:17:33,974 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 05:17:33,976 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:33,977 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:33,977 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:33,977 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:33,977 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:33,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4016c8687e7bf933eac79b515aa2bea5, ASSIGN}] 2023-07-12 05:17:33,979 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4016c8687e7bf933eac79b515aa2bea5, ASSIGN 2023-07-12 05:17:33,984 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=4016c8687e7bf933eac79b515aa2bea5, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45255,1689139052879; forceNewPlan=false, retain=false 2023-07-12 05:17:33,984 INFO [jenkins-hbase20:44483] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 05:17:33,986 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d7db99eb801bd15082adcf4cabefa8f, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,986 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139053986"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139053986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139053986"}]},"ts":"1689139053986"} 2023-07-12 05:17:33,987 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=4016c8687e7bf933eac79b515aa2bea5, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:33,987 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139053987"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139053987"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139053987"}]},"ts":"1689139053987"} 2023-07-12 05:17:33,990 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 8d7db99eb801bd15082adcf4cabefa8f, server=jenkins-hbase20.apache.org,45255,1689139052879}] 2023-07-12 05:17:33,991 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 4016c8687e7bf933eac79b515aa2bea5, server=jenkins-hbase20.apache.org,45255,1689139052879}] 2023-07-12 05:17:34,150 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:34,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4016c8687e7bf933eac79b515aa2bea5, NAME => 'hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:34,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:34,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. service=MultiRowMutationService 2023-07-12 05:17:34,151 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 05:17:34,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:34,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,156 INFO [StoreOpener-4016c8687e7bf933eac79b515aa2bea5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,158 DEBUG [StoreOpener-4016c8687e7bf933eac79b515aa2bea5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/m 2023-07-12 05:17:34,158 DEBUG [StoreOpener-4016c8687e7bf933eac79b515aa2bea5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/m 2023-07-12 05:17:34,159 INFO [StoreOpener-4016c8687e7bf933eac79b515aa2bea5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4016c8687e7bf933eac79b515aa2bea5 columnFamilyName m 2023-07-12 05:17:34,160 INFO [StoreOpener-4016c8687e7bf933eac79b515aa2bea5-1] regionserver.HStore(310): Store=4016c8687e7bf933eac79b515aa2bea5/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:34,162 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,162 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,168 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:34,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:34,171 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 4016c8687e7bf933eac79b515aa2bea5; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7d693246, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:34,171 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 4016c8687e7bf933eac79b515aa2bea5: 2023-07-12 05:17:34,172 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5., pid=9, masterSystemTime=1689139054145 2023-07-12 05:17:34,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:34,176 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:34,176 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 
2023-07-12 05:17:34,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8d7db99eb801bd15082adcf4cabefa8f, NAME => 'hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:34,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,177 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:34,177 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,177 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,177 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=4016c8687e7bf933eac79b515aa2bea5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:34,177 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139054177"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139054177"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139054177"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139054177"}]},"ts":"1689139054177"} 2023-07-12 05:17:34,181 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 05:17:34,181 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 4016c8687e7bf933eac79b515aa2bea5, server=jenkins-hbase20.apache.org,45255,1689139052879 in 189 msec 2023-07-12 05:17:34,183 INFO [StoreOpener-8d7db99eb801bd15082adcf4cabefa8f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,184 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 05:17:34,184 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=4016c8687e7bf933eac79b515aa2bea5, ASSIGN in 204 msec 2023-07-12 05:17:34,186 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:34,186 DEBUG [StoreOpener-8d7db99eb801bd15082adcf4cabefa8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/info 2023-07-12 
05:17:34,186 DEBUG [StoreOpener-8d7db99eb801bd15082adcf4cabefa8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/info 2023-07-12 05:17:34,186 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139054186"}]},"ts":"1689139054186"} 2023-07-12 05:17:34,186 INFO [StoreOpener-8d7db99eb801bd15082adcf4cabefa8f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8d7db99eb801bd15082adcf4cabefa8f columnFamilyName info 2023-07-12 05:17:34,188 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 05:17:34,188 INFO [StoreOpener-8d7db99eb801bd15082adcf4cabefa8f-1] regionserver.HStore(310): Store=8d7db99eb801bd15082adcf4cabefa8f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,190 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:34,192 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 261 msec 2023-07-12 05:17:34,198 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:34,203 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:34,204 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8d7db99eb801bd15082adcf4cabefa8f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10383316800, 
jitterRate=-0.03297826647758484}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:34,204 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8d7db99eb801bd15082adcf4cabefa8f: 2023-07-12 05:17:34,205 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f., pid=8, masterSystemTime=1689139054145 2023-07-12 05:17:34,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:34,210 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:34,211 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8d7db99eb801bd15082adcf4cabefa8f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:34,212 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139054211"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139054211"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139054211"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139054211"}]},"ts":"1689139054211"} 2023-07-12 05:17:34,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 05:17:34,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 8d7db99eb801bd15082adcf4cabefa8f, server=jenkins-hbase20.apache.org,45255,1689139052879 in 226 msec 2023-07-12 05:17:34,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 05:17:34,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8d7db99eb801bd15082adcf4cabefa8f, ASSIGN in 307 msec 2023-07-12 05:17:34,219 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:34,219 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139054219"}]},"ts":"1689139054219"} 2023-07-12 05:17:34,220 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 05:17:34,222 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:34,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 372 msec 2023-07-12 05:17:34,237 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 05:17:34,237 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 05:17:34,242 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:34,242 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:34,243 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:34,244 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44483,1689139052348] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 05:17:34,253 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 05:17:34,257 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:34,257 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:34,265 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 05:17:34,273 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:34,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-12 05:17:34,276 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 05:17:34,285 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:34,294 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 05:17:34,296 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 
in 18 msec 2023-07-12 05:17:34,304 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 05:17:34,319 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 05:17:34,319 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.265sec 2023-07-12 05:17:34,319 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-12 05:17:34,319 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:34,322 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 05:17:34,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 05:17:34,326 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:34,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 05:17:34,329 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:34,333 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,336 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34 empty. 2023-07-12 05:17:34,337 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,337 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 05:17:34,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 
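For context on the 'hbase:quota' descriptor logged above (families 'q' and 'u', VERSIONS => '1', BLOCKSIZE => '65536'): the master builds this table itself via MasterQuotaManager, but a minimal sketch of declaring an equivalent descriptor with the standard HBase 2.x client API looks roughly like the following. The class name is illustrative and the sketch is not part of the captured log or the test code.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class QuotaTableDescriptorSketch {
  public static void main(String[] args) {
    // Mirrors the descriptor logged above: two families 'q' and 'u',
    // VERSIONS => '1', BLOCKSIZE => '65536'; other attributes stay at defaults.
    TableDescriptor quotaLike = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase", "quota"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
            .setMaxVersions(1).setBlocksize(65536).build())
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u"))
            .setMaxVersions(1).setBlocksize(65536).build())
        .build();
    System.out.println(quotaLike);
  }
}
```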
2023-07-12 05:17:34,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 05:17:34,345 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ReadOnlyZKClient(139): Connect 0x7bcdbec4 to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:34,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:34,347 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:34,347 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 05:17:34,348 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 05:17:34,348 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44483,1689139052348-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 05:17:34,348 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44483,1689139052348-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 05:17:34,374 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 05:17:34,377 DEBUG [Listener at localhost.localdomain/42409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f9f17cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:34,380 DEBUG [hconnection-0x320cc13b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:34,392 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:34,393 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:34,394 INFO [Listener at localhost.localdomain/42409] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:34,398 DEBUG [Listener at localhost.localdomain/42409] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 05:17:34,402 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 05:17:34,403 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:34,405 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 05:17:34,405 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:34,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 05:17:34,407 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ReadOnlyZKClient(139): Connect 0x51dcc376 to 127.0.0.1:63349 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:34,414 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 481b0d08dd1f52cccd48f640e8568d34, NAME => 'hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp 2023-07-12 05:17:34,425 DEBUG [Listener at localhost.localdomain/42409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50dc56f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:34,426 INFO [Listener at localhost.localdomain/42409] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63349 2023-07-12 05:17:34,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 05:17:34,436 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 481b0d08dd1f52cccd48f640e8568d34, disabling compactions & flushes 2023-07-12 05:17:34,437 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 
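The request logged above creates namespace 'np1' with the quota properties hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2. A minimal sketch of issuing the same request through the HBase 2.x Admin API (class name illustrative, connection settings assumed) could look like this:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateQuotaLimitedNamespace {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Same properties as the request logged above: at most 5 regions
      // and 2 tables allowed in the namespace.
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build();
      admin.createNamespace(np1);
    }
  }
}
```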
2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. after waiting 0 ms 2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:34,437 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:34,437 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 481b0d08dd1f52cccd48f640e8568d34: 2023-07-12 05:17:34,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1007f9cfb89000a connected 2023-07-12 05:17:34,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 05:17:34,452 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:34,454 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689139054454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139054454"}]},"ts":"1689139054454"} 2023-07-12 05:17:34,455 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:34,457 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:34,457 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139054457"}]},"ts":"1689139054457"} 2023-07-12 05:17:34,460 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 05:17:34,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 05:17:34,466 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:34,469 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:34,469 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:34,470 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:34,470 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:34,470 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:34,470 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:quota, region=481b0d08dd1f52cccd48f640e8568d34, ASSIGN}] 2023-07-12 05:17:34,471 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 27 msec 2023-07-12 05:17:34,471 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=481b0d08dd1f52cccd48f640e8568d34, ASSIGN 2023-07-12 05:17:34,472 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=481b0d08dd1f52cccd48f640e8568d34, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45255,1689139052879; forceNewPlan=false, retain=false 2023-07-12 05:17:34,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 05:17:34,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:34,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 05:17:34,572 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:34,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-12 05:17:34,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:34,575 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:34,575 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:34,578 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:34,580 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:34,581 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73 empty. 
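The 'np1:table1' creation above (family 'fam1', CreateTableProcedure pid=15) corresponds, on the client side, to a blocking Admin.createTable call; the repeated "Checking to see if procedure is done pid=15" lines are the client polling the master procedure. A hedged sketch, assuming the standard HBase 2.x client API (class name illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableInNamespace {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor table1 = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build();
      // Submits a CreateTableProcedure on the master and blocks until it
      // completes, which is what the "Checking to see if procedure is done"
      // lines above reflect.
      admin.createTable(table1);
    }
  }
}
```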
2023-07-12 05:17:34,581 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:34,581 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 05:17:34,612 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:34,613 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8f6d548417b37332595fba2509324a73, NAME => 'np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp 2023-07-12 05:17:34,622 INFO [jenkins-hbase20:44483] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 05:17:34,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=481b0d08dd1f52cccd48f640e8568d34, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:34,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689139054624"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139054624"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139054624"}]},"ts":"1689139054624"} 2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 8f6d548417b37332595fba2509324a73, disabling compactions & flushes 2023-07-12 05:17:34,625 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. after waiting 0 ms 2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:34,625 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 
2023-07-12 05:17:34,625 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 8f6d548417b37332595fba2509324a73: 2023-07-12 05:17:34,628 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=14, state=RUNNABLE; OpenRegionProcedure 481b0d08dd1f52cccd48f640e8568d34, server=jenkins-hbase20.apache.org,45255,1689139052879}] 2023-07-12 05:17:34,628 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:34,629 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139054629"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139054629"}]},"ts":"1689139054629"} 2023-07-12 05:17:34,630 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:34,631 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:34,631 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139054631"}]},"ts":"1689139054631"} 2023-07-12 05:17:34,633 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 05:17:34,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:34,783 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 
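The ASSIGN/OpenRegionProcedure steps and hbase:meta puts above record which server each new region lands on. A minimal sketch (illustrative class name, standard HBase 2.x client API assumed) of reading those assignments back from a client via RegionLocator:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class PrintRegionLocations {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("hbase", "quota"))) {
      // Each location pairs a RegionInfo (encoded name, start/end key) with the
      // server the master assigned it to, i.e. the regionLocation written to hbase:meta.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```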
2023-07-12 05:17:34,783 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 481b0d08dd1f52cccd48f640e8568d34, NAME => 'hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:34,784 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,784 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:34,784 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,784 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,787 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,788 DEBUG [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/q 2023-07-12 05:17:34,788 DEBUG [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/q 2023-07-12 05:17:34,788 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 481b0d08dd1f52cccd48f640e8568d34 columnFamilyName q 2023-07-12 05:17:34,789 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] regionserver.HStore(310): Store=481b0d08dd1f52cccd48f640e8568d34/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:34,789 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,790 DEBUG 
[StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/u 2023-07-12 05:17:34,790 DEBUG [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/u 2023-07-12 05:17:34,791 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 481b0d08dd1f52cccd48f640e8568d34 columnFamilyName u 2023-07-12 05:17:34,792 INFO [StoreOpener-481b0d08dd1f52cccd48f640e8568d34-1] regionserver.HStore(310): Store=481b0d08dd1f52cccd48f640e8568d34/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:34,792 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,795 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
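The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so a per-family share of the memstore flush size (64.0 M) is used instead. A hedged sketch of setting that key on a table descriptor, assuming the value is interpreted as a byte size (the 16 MB figure and all names below are illustrative, not the test's configuration):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushLowerBoundSketch {
  public static void main(String[] args) {
    // Sets the descriptor key the FlushLargeStoresPolicy message above checks for;
    // 16 MB is an illustrative value.
    TableDescriptor withLowerBound = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("default", "example"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16L * 1024 * 1024))
        .build();
    System.out.println(withLowerBound);
  }
}
```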
2023-07-12 05:17:34,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:34,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:34,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 481b0d08dd1f52cccd48f640e8568d34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9760738880, jitterRate=-0.09096035361289978}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 05:17:34,799 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 481b0d08dd1f52cccd48f640e8568d34: 2023-07-12 05:17:34,800 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34., pid=16, masterSystemTime=1689139054780 2023-07-12 05:17:34,801 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:34,801 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:34,801 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=481b0d08dd1f52cccd48f640e8568d34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:34,801 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689139054801"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139054801"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139054801"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139054801"}]},"ts":"1689139054801"} 2023-07-12 05:17:34,805 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=14 2023-07-12 05:17:34,805 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=14, state=SUCCESS; OpenRegionProcedure 481b0d08dd1f52cccd48f640e8568d34, server=jenkins-hbase20.apache.org,45255,1689139052879 in 175 msec 2023-07-12 05:17:34,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 05:17:34,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=481b0d08dd1f52cccd48f640e8568d34, ASSIGN in 335 msec 2023-07-12 05:17:34,809 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:34,809 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139054809"}]},"ts":"1689139054809"} 2023-07-12 05:17:34,810 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 05:17:34,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:34,910 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:34,910 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:34,910 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:34,910 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:34,910 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:34,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, ASSIGN}] 2023-07-12 05:17:34,912 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, ASSIGN 2023-07-12 05:17:34,912 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45183,1689139052710; forceNewPlan=false, retain=false 2023-07-12 05:17:34,914 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:34,916 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 595 msec 2023-07-12 05:17:35,062 INFO [jenkins-hbase20:44483] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:35,064 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8f6d548417b37332595fba2509324a73, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:35,064 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139055064"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139055064"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139055064"}]},"ts":"1689139055064"} 2023-07-12 05:17:35,066 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 8f6d548417b37332595fba2509324a73, server=jenkins-hbase20.apache.org,45183,1689139052710}] 2023-07-12 05:17:35,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:35,219 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:35,219 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:35,221 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:35,227 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f6d548417b37332595fba2509324a73, NAME => 'np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:35,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:35,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,229 INFO [StoreOpener-8f6d548417b37332595fba2509324a73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,231 DEBUG [StoreOpener-8f6d548417b37332595fba2509324a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73/fam1 2023-07-12 05:17:35,231 DEBUG 
[StoreOpener-8f6d548417b37332595fba2509324a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73/fam1 2023-07-12 05:17:35,231 INFO [StoreOpener-8f6d548417b37332595fba2509324a73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f6d548417b37332595fba2509324a73 columnFamilyName fam1 2023-07-12 05:17:35,232 INFO [StoreOpener-8f6d548417b37332595fba2509324a73-1] regionserver.HStore(310): Store=8f6d548417b37332595fba2509324a73/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:35,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:35,243 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8f6d548417b37332595fba2509324a73; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11536563520, jitterRate=0.07442620396614075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:35,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8f6d548417b37332595fba2509324a73: 2023-07-12 05:17:35,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73., pid=18, masterSystemTime=1689139055219 2023-07-12 05:17:35,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 
2023-07-12 05:17:35,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,253 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8f6d548417b37332595fba2509324a73, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:35,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139055253"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139055253"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139055253"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139055253"}]},"ts":"1689139055253"} 2023-07-12 05:17:35,258 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 05:17:35,258 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 8f6d548417b37332595fba2509324a73, server=jenkins-hbase20.apache.org,45183,1689139052710 in 189 msec 2023-07-12 05:17:35,261 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-12 05:17:35,261 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, ASSIGN in 347 msec 2023-07-12 05:17:35,262 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:35,263 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139055262"}]},"ts":"1689139055262"} 2023-07-12 05:17:35,270 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 05:17:35,273 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:35,280 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 705 msec 2023-07-12 05:17:35,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 05:17:35,679 INFO [Listener at localhost.localdomain/42409] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-12 05:17:35,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:35,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 05:17:35,685 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:35,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 05:17:35,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 05:17:35,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=34 msec 2023-07-12 05:17:35,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 05:17:35,796 INFO [Listener at localhost.localdomain/42409] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
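The rollback above is the namespace region quota doing its job: np1 allows 5 regions, np1:table1 already holds 1, and creating np1:table2 with 5 regions would bring the total to 6, so the CreateTableProcedure is rolled back with QuotaExceededException. A hedged sketch of how such a client call and failure could look, assuming the exception surfaces to the caller as QuotaExceededException (it may also arrive wrapped in another IOException); class and split-key choices are illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.quotas.QuotaExceededException;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor table2 = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table2"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build();
      try {
        // Pre-splitting into 5 regions would push the namespace to 6 regions
        // (np1:table1 already holds 1), past the maxregions quota of 5.
        admin.createTable(table2, Bytes.toBytes("a"), Bytes.toBytes("z"), 5);
      } catch (QuotaExceededException e) {
        // Expected: the master rolls back the CreateTableProcedure.
        System.out.println("Rejected by namespace quota: " + e.getMessage());
      }
    }
  }
}
```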
2023-07-12 05:17:35,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:35,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:35,799 INFO [Listener at localhost.localdomain/42409] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 05:17:35,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable np1:table1 2023-07-12 05:17:35,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 05:17:35,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 05:17:35,804 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139055804"}]},"ts":"1689139055804"} 2023-07-12 05:17:35,805 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 05:17:35,806 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 05:17:35,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, UNASSIGN}] 2023-07-12 05:17:35,808 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, UNASSIGN 2023-07-12 05:17:35,809 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=8f6d548417b37332595fba2509324a73, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:35,809 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139055809"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139055809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139055809"}]},"ts":"1689139055809"} 2023-07-12 05:17:35,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 8f6d548417b37332595fba2509324a73, server=jenkins-hbase20.apache.org,45183,1689139052710}] 2023-07-12 05:17:35,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 05:17:35,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8f6d548417b37332595fba2509324a73, disabling compactions & flushes 2023-07-12 05:17:35,964 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. after waiting 0 ms 2023-07-12 05:17:35,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/np1/table1/8f6d548417b37332595fba2509324a73/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:35,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73. 2023-07-12 05:17:35,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8f6d548417b37332595fba2509324a73: 2023-07-12 05:17:35,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 8f6d548417b37332595fba2509324a73 2023-07-12 05:17:35,976 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=8f6d548417b37332595fba2509324a73, regionState=CLOSED 2023-07-12 05:17:35,976 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139055976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139055976"}]},"ts":"1689139055976"} 2023-07-12 05:17:35,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 05:17:35,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 8f6d548417b37332595fba2509324a73, server=jenkins-hbase20.apache.org,45183,1689139052710 in 167 msec 2023-07-12 05:17:35,981 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 05:17:35,981 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=8f6d548417b37332595fba2509324a73, UNASSIGN in 172 msec 2023-07-12 05:17:35,982 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139055982"}]},"ts":"1689139055982"} 2023-07-12 05:17:35,983 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 05:17:35,984 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 05:17:35,989 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 187 msec 2023-07-12 05:17:36,106 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 05:17:36,107 INFO [Listener at localhost.localdomain/42409] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 05:17:36,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete np1:table1 2023-07-12 05:17:36,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,111 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 05:17:36,113 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:36,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:36,180 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:36,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 05:17:36,194 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73/fam1, FileablePath, hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73/recovered.edits] 2023-07-12 05:17:36,209 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73/recovered.edits/4.seqid to hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/archive/data/np1/table1/8f6d548417b37332595fba2509324a73/recovered.edits/4.seqid 2023-07-12 05:17:36,210 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/.tmp/data/np1/table1/8f6d548417b37332595fba2509324a73 2023-07-12 05:17:36,210 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 05:17:36,214 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,216 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some 
vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 05:17:36,218 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-12 05:17:36,220 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,220 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 05:17:36,220 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139056220"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:36,221 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 05:17:36,222 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8f6d548417b37332595fba2509324a73, NAME => 'np1:table1,,1689139054567.8f6d548417b37332595fba2509324a73.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 05:17:36,222 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-12 05:17:36,222 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139056222"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:36,223 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 05:17:36,225 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 05:17:36,226 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 117 msec 2023-07-12 05:17:36,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 05:17:36,294 INFO [Listener at localhost.localdomain/42409] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 05:17:36,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete np1 2023-07-12 05:17:36,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,310 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,315 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 05:17:36,319 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,320 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 05:17:36,320 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:36,323 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,325 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 05:17:36,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 25 msec 2023-07-12 05:17:36,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 05:17:36,421 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 05:17:36,421 INFO [Listener at localhost.localdomain/42409] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 05:17:36,421 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7bcdbec4 to 127.0.0.1:63349 2023-07-12 05:17:36,421 DEBUG [Listener at localhost.localdomain/42409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,421 DEBUG [Listener at localhost.localdomain/42409] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 05:17:36,421 DEBUG [Listener at localhost.localdomain/42409] util.JVMClusterUtil(257): Found active master hash=19227730, stopped=false 2023-07-12 05:17:36,421 DEBUG [Listener at localhost.localdomain/42409] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 05:17:36,422 INFO [Listener at localhost.localdomain/42409] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:36,422 INFO [Listener at localhost.localdomain/42409] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/running 2023-07-12 05:17:36,422 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:36,424 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:36,424 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:36,424 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:36,424 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:36,424 DEBUG [Listener at localhost.localdomain/42409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x14b39b87 to 127.0.0.1:63349 2023-07-12 05:17:36,424 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:36,425 DEBUG [Listener at localhost.localdomain/42409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,425 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,45775,1689139052544' ***** 2023-07-12 05:17:36,425 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:36,425 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:36,425 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,45183,1689139052710' ***** 2023-07-12 05:17:36,429 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:36,430 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,45255,1689139052879' ***** 2023-07-12 05:17:36,430 INFO [Listener at localhost.localdomain/42409] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:36,430 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:36,430 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:36,436 INFO [RS:0;jenkins-hbase20:45775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@63e98841{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:36,436 INFO [RS:2;jenkins-hbase20:45255] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@614a7c65{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:36,436 INFO [RS:1;jenkins-hbase20:45183] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3c96a83e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:36,436 INFO [RS:0;jenkins-hbase20:45775] server.AbstractConnector(383): Stopped ServerConnector@7ab741a2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:36,437 INFO [RS:0;jenkins-hbase20:45775] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:36,437 INFO [RS:2;jenkins-hbase20:45255] server.AbstractConnector(383): Stopped ServerConnector@6902cbe7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:36,437 INFO [RS:1;jenkins-hbase20:45183] server.AbstractConnector(383): Stopped ServerConnector@2fce9c34{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:36,437 INFO [RS:0;jenkins-hbase20:45775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@73c04d86{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:36,437 INFO [RS:1;jenkins-hbase20:45183] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:36,437 INFO [RS:2;jenkins-hbase20:45255] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:36,439 INFO [RS:0;jenkins-hbase20:45775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6d5f7bf2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:36,439 INFO [RS:2;jenkins-hbase20:45255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55578c3e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:36,439 INFO [RS:1;jenkins-hbase20:45183] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28745c0c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:36,440 INFO [RS:2;jenkins-hbase20:45255] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24ce9aa0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:36,440 INFO [RS:1;jenkins-hbase20:45183] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@8bfb4d5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:36,440 INFO [RS:0;jenkins-hbase20:45775] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:36,440 INFO [RS:0;jenkins-hbase20:45775] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 05:17:36,440 INFO [RS:0;jenkins-hbase20:45775] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:36,440 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:36,440 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:36,441 INFO [RS:2;jenkins-hbase20:45255] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:36,441 DEBUG [RS:0;jenkins-hbase20:45775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x64111010 to 127.0.0.1:63349 2023-07-12 05:17:36,441 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:36,441 INFO [RS:2;jenkins-hbase20:45255] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:36,441 INFO [RS:1;jenkins-hbase20:45183] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:36,441 DEBUG [RS:0;jenkins-hbase20:45775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,441 INFO [RS:1;jenkins-hbase20:45183] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:36,441 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:36,441 INFO [RS:2;jenkins-hbase20:45255] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:36,442 INFO [RS:1;jenkins-hbase20:45183] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:36,443 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:36,442 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45775,1689139052544; all regions closed. 2023-07-12 05:17:36,443 DEBUG [RS:1;jenkins-hbase20:45183] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7445cfab to 127.0.0.1:63349 2023-07-12 05:17:36,443 DEBUG [RS:0;jenkins-hbase20:45775] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 05:17:36,443 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(3305): Received CLOSE for 4016c8687e7bf933eac79b515aa2bea5 2023-07-12 05:17:36,443 DEBUG [RS:1;jenkins-hbase20:45183] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,443 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45183,1689139052710; all regions closed. 2023-07-12 05:17:36,443 DEBUG [RS:1;jenkins-hbase20:45183] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-12 05:17:36,443 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(3305): Received CLOSE for 481b0d08dd1f52cccd48f640e8568d34 2023-07-12 05:17:36,443 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(3305): Received CLOSE for 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:36,443 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:36,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4016c8687e7bf933eac79b515aa2bea5, disabling compactions & flushes 2023-07-12 05:17:36,444 DEBUG [RS:2;jenkins-hbase20:45255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b4fad94 to 127.0.0.1:63349 2023-07-12 05:17:36,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:36,444 DEBUG [RS:2;jenkins-hbase20:45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:36,444 INFO [RS:2;jenkins-hbase20:45255] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:36,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. after waiting 0 ms 2023-07-12 05:17:36,444 INFO [RS:2;jenkins-hbase20:45255] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:36,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:36,444 INFO [RS:2;jenkins-hbase20:45255] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 05:17:36,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 4016c8687e7bf933eac79b515aa2bea5 1/1 column families, dataSize=594 B heapSize=1.05 KB 2023-07-12 05:17:36,444 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 05:17:36,445 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 05:17:36,445 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1478): Online Regions={4016c8687e7bf933eac79b515aa2bea5=hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5., 481b0d08dd1f52cccd48f640e8568d34=hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34., 1588230740=hbase:meta,,1.1588230740, 8d7db99eb801bd15082adcf4cabefa8f=hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f.} 2023-07-12 05:17:36,445 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1504): Waiting on 1588230740, 4016c8687e7bf933eac79b515aa2bea5, 481b0d08dd1f52cccd48f640e8568d34, 8d7db99eb801bd15082adcf4cabefa8f 2023-07-12 05:17:36,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:36,447 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:36,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:36,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 05:17:36,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:36,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-12 05:17:36,455 DEBUG [RS:1;jenkins-hbase20:45183] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45183%2C1689139052710:(num 1689139053627) 2023-07-12 05:17:36,455 DEBUG [RS:1;jenkins-hbase20:45183] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:36,455 INFO [RS:1;jenkins-hbase20:45183] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:36,456 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 05:17:36,456 INFO [RS:1;jenkins-hbase20:45183] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45183 2023-07-12 05:17:36,462 DEBUG [RS:0;jenkins-hbase20:45775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs 2023-07-12 05:17:36,462 INFO [RS:0;jenkins-hbase20:45775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45775%2C1689139052544:(num 1689139053619) 2023-07-12 05:17:36,462 DEBUG [RS:0;jenkins-hbase20:45775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,462 INFO [RS:0;jenkins-hbase20:45775] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,463 INFO [RS:0;jenkins-hbase20:45775] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:36,463 INFO [RS:0;jenkins-hbase20:45775] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:36,463 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:36,463 INFO [RS:0;jenkins-hbase20:45775] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:36,463 INFO [RS:0;jenkins-hbase20:45775] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:36,464 INFO [RS:0;jenkins-hbase20:45775] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45775 2023-07-12 05:17:36,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=594 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/.tmp/m/96a5edda8b724a639a5ad66329751576 2023-07-12 05:17:36,480 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,484 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/info/edd9468c00ec48c6a85d01cdda31d917 2023-07-12 05:17:36,488 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,488 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/.tmp/m/96a5edda8b724a639a5ad66329751576 as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/m/96a5edda8b724a639a5ad66329751576 2023-07-12 05:17:36,493 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for edd9468c00ec48c6a85d01cdda31d917 2023-07-12 05:17:36,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/m/96a5edda8b724a639a5ad66329751576, entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 05:17:36,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~594 B/594, heapSize ~1.04 KB/1064, currentSize=0 B/0 for 4016c8687e7bf933eac79b515aa2bea5 in 61ms, sequenceid=7, compaction requested=false 2023-07-12 05:17:36,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 05:17:36,509 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 05:17:36,509 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 05:17:36,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/rep_barrier/ebcf959c51fe4f1db8d13793a36d4e4b 2023-07-12 05:17:36,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/rsgroup/4016c8687e7bf933eac79b515aa2bea5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 05:17:36,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:36,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:36,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4016c8687e7bf933eac79b515aa2bea5: 2023-07-12 05:17:36,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689139053930.4016c8687e7bf933eac79b515aa2bea5. 2023-07-12 05:17:36,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 481b0d08dd1f52cccd48f640e8568d34, disabling compactions & flushes 2023-07-12 05:17:36,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:36,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:36,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. after waiting 0 ms 2023-07-12 05:17:36,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 
2023-07-12 05:17:36,529 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ebcf959c51fe4f1db8d13793a36d4e4b 2023-07-12 05:17:36,531 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:36,532 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:36,532 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:36,532 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:36,533 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:36,532 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45183,1689139052710 2023-07-12 05:17:36,533 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:36,533 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 2023-07-12 05:17:36,533 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45775,1689139052544] 2023-07-12 05:17:36,533 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45775,1689139052544; numProcessing=1 2023-07-12 05:17:36,537 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:36,537 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45775,1689139052544 
2023-07-12 05:17:36,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/quota/481b0d08dd1f52cccd48f640e8568d34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:36,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:36,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 481b0d08dd1f52cccd48f640e8568d34: 2023-07-12 05:17:36,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689139054319.481b0d08dd1f52cccd48f640e8568d34. 2023-07-12 05:17:36,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8d7db99eb801bd15082adcf4cabefa8f, disabling compactions & flushes 2023-07-12 05:17:36,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:36,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:36,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. after waiting 0 ms 2023-07-12 05:17:36,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 
2023-07-12 05:17:36,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 8d7db99eb801bd15082adcf4cabefa8f 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 05:17:36,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/table/b7158221188f467bacf8af11af8be394 2023-07-12 05:17:36,572 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b7158221188f467bacf8af11af8be394 2023-07-12 05:17:36,574 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/info/edd9468c00ec48c6a85d01cdda31d917 as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/info/edd9468c00ec48c6a85d01cdda31d917 2023-07-12 05:17:36,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/.tmp/info/fe20e155999643b986053b591c0451ae 2023-07-12 05:17:36,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for edd9468c00ec48c6a85d01cdda31d917 2023-07-12 05:17:36,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/info/edd9468c00ec48c6a85d01cdda31d917, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 05:17:36,583 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/rep_barrier/ebcf959c51fe4f1db8d13793a36d4e4b as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/rep_barrier/ebcf959c51fe4f1db8d13793a36d4e4b 2023-07-12 05:17:36,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe20e155999643b986053b591c0451ae 2023-07-12 05:17:36,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/.tmp/info/fe20e155999643b986053b591c0451ae as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/info/fe20e155999643b986053b591c0451ae 2023-07-12 05:17:36,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ebcf959c51fe4f1db8d13793a36d4e4b 2023-07-12 05:17:36,597 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/rep_barrier/ebcf959c51fe4f1db8d13793a36d4e4b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 05:17:36,598 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/.tmp/table/b7158221188f467bacf8af11af8be394 as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/table/b7158221188f467bacf8af11af8be394 2023-07-12 05:17:36,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe20e155999643b986053b591c0451ae 2023-07-12 05:17:36,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/info/fe20e155999643b986053b591c0451ae, entries=3, sequenceid=8, filesize=5.0 K 2023-07-12 05:17:36,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 8d7db99eb801bd15082adcf4cabefa8f in 62ms, sequenceid=8, compaction requested=false 2023-07-12 05:17:36,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 05:17:36,621 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b7158221188f467bacf8af11af8be394 2023-07-12 05:17:36,621 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/table/b7158221188f467bacf8af11af8be394, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 05:17:36,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=31, compaction requested=false 2023-07-12 05:17:36,622 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 05:17:36,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/namespace/8d7db99eb801bd15082adcf4cabefa8f/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 05:17:36,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 
2023-07-12 05:17:36,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8d7db99eb801bd15082adcf4cabefa8f: 2023-07-12 05:17:36,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689139053851.8d7db99eb801bd15082adcf4cabefa8f. 2023-07-12 05:17:36,635 INFO [RS:1;jenkins-hbase20:45183] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45183,1689139052710; zookeeper connection closed. 2023-07-12 05:17:36,637 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c047f69] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c047f69 2023-07-12 05:17:36,638 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:36,638 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45183-0x1007f9cfb890002, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:36,642 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45775,1689139052544 already deleted, retry=false 2023-07-12 05:17:36,642 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45775,1689139052544 expired; onlineServers=2 2023-07-12 05:17:36,642 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45183,1689139052710] 2023-07-12 05:17:36,642 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45183,1689139052710; numProcessing=2 2023-07-12 05:17:36,643 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45183,1689139052710 already deleted, retry=false 2023-07-12 05:17:36,643 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45183,1689139052710 expired; onlineServers=1 2023-07-12 05:17:36,645 DEBUG [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 05:17:36,646 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 05:17:36,647 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:36,648 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:36,648 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:36,648 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:36,734 INFO [RS:0;jenkins-hbase20:45775] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45775,1689139052544; zookeeper connection closed. 
2023-07-12 05:17:36,734 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:36,734 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45775-0x1007f9cfb890001, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:36,735 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56ad96a5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56ad96a5 2023-07-12 05:17:36,845 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45255,1689139052879; all regions closed. 2023-07-12 05:17:36,846 DEBUG [RS:2;jenkins-hbase20:45255] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 05:17:36,873 DEBUG [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs 2023-07-12 05:17:36,873 INFO [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45255%2C1689139052879.meta:.meta(num 1689139053774) 2023-07-12 05:17:36,888 DEBUG [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/oldWALs 2023-07-12 05:17:36,888 INFO [RS:2;jenkins-hbase20:45255] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45255%2C1689139052879:(num 1689139053610) 2023-07-12 05:17:36,888 DEBUG [RS:2;jenkins-hbase20:45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,889 INFO [RS:2;jenkins-hbase20:45255] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:36,889 INFO [RS:2;jenkins-hbase20:45255] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:36,889 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 05:17:36,890 INFO [RS:2;jenkins-hbase20:45255] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45255 2023-07-12 05:17:36,895 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:36,895 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45255,1689139052879 2023-07-12 05:17:36,896 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45255,1689139052879] 2023-07-12 05:17:36,896 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45255,1689139052879; numProcessing=3 2023-07-12 05:17:36,896 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45255,1689139052879 already deleted, retry=false 2023-07-12 05:17:36,896 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45255,1689139052879 expired; onlineServers=0 2023-07-12 05:17:36,896 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44483,1689139052348' ***** 2023-07-12 05:17:36,896 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 05:17:36,898 DEBUG [M:0;jenkins-hbase20:44483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51af0ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:36,898 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:36,902 INFO [M:0;jenkins-hbase20:44483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4941a526{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:36,902 INFO [M:0;jenkins-hbase20:44483] server.AbstractConnector(383): Stopped ServerConnector@7c97e66a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:36,903 INFO [M:0;jenkins-hbase20:44483] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:36,903 INFO [M:0;jenkins-hbase20:44483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d3bf816{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:36,903 INFO [M:0;jenkins-hbase20:44483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5837a87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:36,906 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, 
quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:36,906 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:36,906 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44483,1689139052348 2023-07-12 05:17:36,906 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44483,1689139052348; all regions closed. 2023-07-12 05:17:36,907 DEBUG [M:0;jenkins-hbase20:44483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:36,907 INFO [M:0;jenkins-hbase20:44483] master.HMaster(1491): Stopping master jetty server 2023-07-12 05:17:36,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:36,907 INFO [M:0;jenkins-hbase20:44483] server.AbstractConnector(383): Stopped ServerConnector@46f3c937{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:36,908 DEBUG [M:0;jenkins-hbase20:44483] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 05:17:36,908 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 05:17:36,908 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139053323] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139053323,5,FailOnTimeoutGroup] 2023-07-12 05:17:36,908 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139053323] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139053323,5,FailOnTimeoutGroup] 2023-07-12 05:17:36,908 DEBUG [M:0;jenkins-hbase20:44483] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 05:17:36,909 INFO [M:0;jenkins-hbase20:44483] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 05:17:36,909 INFO [M:0;jenkins-hbase20:44483] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 05:17:36,909 INFO [M:0;jenkins-hbase20:44483] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:36,909 DEBUG [M:0;jenkins-hbase20:44483] master.HMaster(1512): Stopping service threads 2023-07-12 05:17:36,910 INFO [M:0;jenkins-hbase20:44483] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 05:17:36,910 ERROR [M:0;jenkins-hbase20:44483] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 05:17:36,910 INFO [M:0;jenkins-hbase20:44483] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 05:17:36,910 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 05:17:36,915 DEBUG [M:0;jenkins-hbase20:44483] zookeeper.ZKUtil(398): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 05:17:36,915 WARN [M:0;jenkins-hbase20:44483] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 05:17:36,915 INFO [M:0;jenkins-hbase20:44483] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 05:17:36,916 INFO [M:0;jenkins-hbase20:44483] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 05:17:36,916 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:36,916 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:36,916 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:36,916 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:36,916 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:36,916 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.07 KB heapSize=109.23 KB 2023-07-12 05:17:36,951 INFO [M:0;jenkins-hbase20:44483] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.07 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e5dc61ee52714d06b3e5e3c8e312b005 2023-07-12 05:17:36,959 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e5dc61ee52714d06b3e5e3c8e312b005 as hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e5dc61ee52714d06b3e5e3c8e312b005 2023-07-12 05:17:36,970 INFO [M:0;jenkins-hbase20:44483] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36357/user/jenkins/test-data/471713d4-5f97-b739-f1cb-bc22bbd55bca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e5dc61ee52714d06b3e5e3c8e312b005, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 05:17:36,971 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegion(2948): Finished flush of dataSize ~93.07 KB/95307, heapSize ~109.21 KB/111832, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 55ms, sequenceid=194, compaction requested=false 2023-07-12 05:17:36,982 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 05:17:36,982 DEBUG [M:0;jenkins-hbase20:44483] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:36,992 INFO [M:0;jenkins-hbase20:44483] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 05:17:36,992 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:36,993 INFO [M:0;jenkins-hbase20:44483] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44483 2023-07-12 05:17:36,994 DEBUG [M:0;jenkins-hbase20:44483] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44483,1689139052348 already deleted, retry=false 2023-07-12 05:17:37,035 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:37,035 INFO [RS:2;jenkins-hbase20:45255] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45255,1689139052879; zookeeper connection closed. 2023-07-12 05:17:37,035 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): regionserver:45255-0x1007f9cfb890003, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:37,036 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5855b372] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5855b372 2023-07-12 05:17:37,036 INFO [Listener at localhost.localdomain/42409] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 05:17:37,135 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:37,135 INFO [M:0;jenkins-hbase20:44483] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44483,1689139052348; zookeeper connection closed. 
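At this point the first mini cluster (1 master, 3 region servers) is fully down; the lines that follow stop the HDFS datanodes and the MiniZK ensemble and then start a second cluster with the same StartMiniClusterOption (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1). A minimal sketch of that stop/restart cycle, assuming the HBaseTestingUtility API this log comes from (class and variable names are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility testUtil = new HBaseTestingUtility();

        // Same topology as the StartMiniClusterOption printed in this log.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();

        testUtil.startMiniCluster(option);   // first cluster
        testUtil.shutdownMiniCluster();      // "Minicluster is down"
        testUtil.startMiniCluster(option);   // fresh cluster, new test-data directory
        testUtil.shutdownMiniCluster();
      }
    }
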
2023-07-12 05:17:37,135 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): master:44483-0x1007f9cfb890000, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:37,136 WARN [Listener at localhost.localdomain/42409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:37,140 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:37,244 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:37,244 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147418204-148.251.75.209-1689139051382 (Datanode Uuid 0444b1ce-b363-4678-b21f-05de9d58a996) service to localhost.localdomain/127.0.0.1:36357 2023-07-12 05:17:37,244 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data5/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:37,245 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data6/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:37,247 WARN [Listener at localhost.localdomain/42409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:37,252 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:37,356 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:37,356 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147418204-148.251.75.209-1689139051382 (Datanode Uuid a4b3e39d-c69c-40f2-a5cb-1a5e79ffe697) service to localhost.localdomain/127.0.0.1:36357 2023-07-12 05:17:37,357 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data3/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:37,358 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data4/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-12 05:17:37,366 WARN [Listener at localhost.localdomain/42409] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:37,371 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:37,476 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:37,476 WARN [BP-147418204-148.251.75.209-1689139051382 heartbeating to localhost.localdomain/127.0.0.1:36357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147418204-148.251.75.209-1689139051382 (Datanode Uuid 4f625e4c-df9b-4fad-aee7-0e30eb33bc52) service to localhost.localdomain/127.0.0.1:36357 2023-07-12 05:17:37,476 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data1/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:37,477 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/cluster_146d007d-cfb5-0efe-6485-9e5d109ca339/dfs/data/data2/current/BP-147418204-148.251.75.209-1689139051382] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:37,485 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 05:17:37,597 INFO [Listener at localhost.localdomain/42409] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 05:17:37,624 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 05:17:37,624 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.log.dir so I do NOT create it in target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f3ef7c6a-14d7-472d-bfae-cf132a4f3160/hadoop.tmp.dir so I do NOT create it in target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f, deleteOnExit=true 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/test.cache.data in system properties and HBase conf 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir in system properties and HBase conf 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 05:17:37,625 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 05:17:37,625 DEBUG [Listener at localhost.localdomain/42409] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 05:17:37,626 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/nfs.dump.dir in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 05:17:37,627 INFO [Listener at localhost.localdomain/42409] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 05:17:37,629 WARN [Listener at localhost.localdomain/42409] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:17:37,629 WARN [Listener at localhost.localdomain/42409] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:17:37,653 WARN [Listener at localhost.localdomain/42409] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:37,655 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:37,660 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/Jetty_localhost_localdomain_37615_hdfs____j20ic0/webapp 2023-07-12 05:17:37,696 DEBUG [Listener at localhost.localdomain/42409-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1007f9cfb89000a, quorum=127.0.0.1:63349, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 05:17:37,696 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1007f9cfb89000a, quorum=127.0.0.1:63349, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 05:17:37,734 INFO [Listener at localhost.localdomain/42409] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37615 2023-07-12 05:17:37,737 WARN [Listener at localhost.localdomain/42409] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 05:17:37,737 WARN [Listener at localhost.localdomain/42409] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 05:17:37,765 WARN [Listener at localhost.localdomain/41411] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:37,783 WARN [Listener at localhost.localdomain/41411] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:37,785 WARN [Listener at localhost.localdomain/41411] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:37,786 INFO [Listener at localhost.localdomain/41411] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:37,790 INFO [Listener at localhost.localdomain/41411] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/Jetty_localhost_40495_datanode____dcv48r/webapp 2023-07-12 05:17:37,864 INFO [Listener at localhost.localdomain/41411] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40495 2023-07-12 05:17:37,871 WARN [Listener at localhost.localdomain/36447] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:37,886 WARN [Listener at localhost.localdomain/36447] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:37,889 WARN [Listener at localhost.localdomain/36447] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:37,890 INFO [Listener at localhost.localdomain/36447] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:37,894 INFO [Listener at localhost.localdomain/36447] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/Jetty_localhost_46767_datanode____.i5pig3/webapp 2023-07-12 05:17:37,952 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1931c6365d79328e: Processing first storage report for DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42 from datanode 28680443-50bc-447e-9ebf-acd623957c7b 2023-07-12 05:17:37,952 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1931c6365d79328e: from storage DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42 node DatanodeRegistration(127.0.0.1:40341, datanodeUuid=28680443-50bc-447e-9ebf-acd623957c7b, infoPort=34655, infoSecurePort=0, ipcPort=36447, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:37,952 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1931c6365d79328e: Processing first storage report for DS-8dc79fae-ba3a-4511-af2c-443fa3d7df2e from datanode 28680443-50bc-447e-9ebf-acd623957c7b 2023-07-12 05:17:37,953 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1931c6365d79328e: from storage DS-8dc79fae-ba3a-4511-af2c-443fa3d7df2e node 
DatanodeRegistration(127.0.0.1:40341, datanodeUuid=28680443-50bc-447e-9ebf-acd623957c7b, infoPort=34655, infoSecurePort=0, ipcPort=36447, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:37,996 INFO [Listener at localhost.localdomain/36447] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46767 2023-07-12 05:17:38,003 WARN [Listener at localhost.localdomain/42427] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:38,014 WARN [Listener at localhost.localdomain/42427] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 05:17:38,017 WARN [Listener at localhost.localdomain/42427] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 05:17:38,018 INFO [Listener at localhost.localdomain/42427] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 05:17:38,022 INFO [Listener at localhost.localdomain/42427] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/Jetty_localhost_42177_datanode____.9zls6/webapp 2023-07-12 05:17:38,114 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e5d49e13e137f37: Processing first storage report for DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006 from datanode fe61c401-8b58-4495-ae04-8e2104f8c356 2023-07-12 05:17:38,114 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e5d49e13e137f37: from storage DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006 node DatanodeRegistration(127.0.0.1:34113, datanodeUuid=fe61c401-8b58-4495-ae04-8e2104f8c356, infoPort=44235, infoSecurePort=0, ipcPort=42427, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:38,114 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5e5d49e13e137f37: Processing first storage report for DS-a693e889-2a91-476c-9f68-19ae26bef1b6 from datanode fe61c401-8b58-4495-ae04-8e2104f8c356 2023-07-12 05:17:38,114 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5e5d49e13e137f37: from storage DS-a693e889-2a91-476c-9f68-19ae26bef1b6 node DatanodeRegistration(127.0.0.1:34113, datanodeUuid=fe61c401-8b58-4495-ae04-8e2104f8c356, infoPort=44235, infoSecurePort=0, ipcPort=42427, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:38,119 INFO [Listener at localhost.localdomain/42427] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42177 2023-07-12 05:17:38,128 WARN [Listener at localhost.localdomain/37977] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 05:17:38,196 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcaa6f6cf50cf6a96: Processing first storage report for DS-0b7ae179-f274-45be-ba4f-2fcfbff52803 from datanode 
80f590bb-fdd1-4e5a-a664-741d79fe48c6 2023-07-12 05:17:38,196 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcaa6f6cf50cf6a96: from storage DS-0b7ae179-f274-45be-ba4f-2fcfbff52803 node DatanodeRegistration(127.0.0.1:39137, datanodeUuid=80f590bb-fdd1-4e5a-a664-741d79fe48c6, infoPort=45915, infoSecurePort=0, ipcPort=37977, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:38,196 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcaa6f6cf50cf6a96: Processing first storage report for DS-6b334479-7876-4712-82ea-15eacacd4eb6 from datanode 80f590bb-fdd1-4e5a-a664-741d79fe48c6 2023-07-12 05:17:38,197 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcaa6f6cf50cf6a96: from storage DS-6b334479-7876-4712-82ea-15eacacd4eb6 node DatanodeRegistration(127.0.0.1:39137, datanodeUuid=80f590bb-fdd1-4e5a-a664-741d79fe48c6, infoPort=45915, infoSecurePort=0, ipcPort=37977, storageInfo=lv=-57;cid=testClusterID;nsid=890871068;c=1689139057631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 05:17:38,240 DEBUG [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff 2023-07-12 05:17:38,243 INFO [Listener at localhost.localdomain/37977] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/zookeeper_0, clientPort=55884, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 05:17:38,244 INFO [Listener at localhost.localdomain/37977] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55884 2023-07-12 05:17:38,244 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,245 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,272 INFO [Listener at localhost.localdomain/37977] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad with version=8 2023-07-12 05:17:38,272 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost.localdomain:35039/user/jenkins/test-data/f872079d-6ccd-67ab-52c0-9f1647a9be6e/hbase-staging 2023-07-12 05:17:38,273 DEBUG [Listener at localhost.localdomain/37977] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 05:17:38,273 DEBUG [Listener at localhost.localdomain/37977] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 05:17:38,273 DEBUG [Listener at localhost.localdomain/37977] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 05:17:38,273 DEBUG [Listener at localhost.localdomain/37977] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:38,274 INFO [Listener at localhost.localdomain/37977] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:38,276 INFO [Listener at localhost.localdomain/37977] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46251 2023-07-12 05:17:38,276 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,278 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,279 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46251 connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:38,284 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:462510x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:38,286 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46251-0x1007f9d12c00000 connected 2023-07-12 05:17:38,296 DEBUG [Listener at 
localhost.localdomain/37977] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:38,296 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:38,297 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:38,297 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46251 2023-07-12 05:17:38,297 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46251 2023-07-12 05:17:38,298 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46251 2023-07-12 05:17:38,298 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46251 2023-07-12 05:17:38,298 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46251 2023-07-12 05:17:38,300 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:38,300 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:38,300 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:38,301 INFO [Listener at localhost.localdomain/37977] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 05:17:38,301 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:38,301 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:38,301 INFO [Listener at localhost.localdomain/37977] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:38,301 INFO [Listener at localhost.localdomain/37977] http.HttpServer(1146): Jetty bound to port 46253 2023-07-12 05:17:38,302 INFO [Listener at localhost.localdomain/37977] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:38,305 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,305 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@216fc6c1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:38,305 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,306 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d75332a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:38,399 INFO [Listener at localhost.localdomain/37977] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:38,400 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:38,401 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:38,401 INFO [Listener at localhost.localdomain/37977] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:38,402 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,404 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@631c9e79{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/jetty-0_0_0_0-46253-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5351294535265483103/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:38,405 INFO [Listener at localhost.localdomain/37977] server.AbstractConnector(333): Started ServerConnector@11bf68c0{HTTP/1.1, (http/1.1)}{0.0.0.0:46253} 2023-07-12 05:17:38,405 INFO [Listener at localhost.localdomain/37977] server.Server(415): Started @42335ms 2023-07-12 05:17:38,406 INFO [Listener at localhost.localdomain/37977] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad, hbase.cluster.distributed=false 2023-07-12 05:17:38,420 INFO [Listener at localhost.localdomain/37977] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:38,421 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,421 INFO 
[Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,421 INFO [Listener at localhost.localdomain/37977] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:38,421 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,421 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:38,421 INFO [Listener at localhost.localdomain/37977] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:38,422 INFO [Listener at localhost.localdomain/37977] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36541 2023-07-12 05:17:38,423 INFO [Listener at localhost.localdomain/37977] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:38,424 DEBUG [Listener at localhost.localdomain/37977] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:38,424 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,425 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,426 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36541 connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:38,429 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:365410x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:38,430 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:365410x0, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:38,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36541-0x1007f9d12c00001 connected 2023-07-12 05:17:38,431 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:38,432 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:38,433 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36541 2023-07-12 05:17:38,433 DEBUG [Listener at localhost.localdomain/37977] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36541 2023-07-12 05:17:38,433 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36541 2023-07-12 05:17:38,433 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36541 2023-07-12 05:17:38,434 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36541 2023-07-12 05:17:38,435 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:38,436 INFO [Listener at localhost.localdomain/37977] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
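The RpcExecutor lines above show each server's call queues coming up with handlerCount=3 for the default and replication queues plus split read/write priority handlers. In a mini-cluster test those values come from the shared Configuration the utility hands to every server; a minimal sketch of tuning it before startup (mapping "handlerCount=3" to hbase.regionserver.handler.count is an assumption here, not something this log states):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class RpcHandlerConfigSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility testUtil = new HBaseTestingUtility();
        Configuration conf = testUtil.getConfiguration();

        // Assumption: this key is the knob behind the handlerCount=3 values logged above.
        conf.setInt("hbase.regionserver.handler.count", 3);

        testUtil.startMiniCluster();
        testUtil.shutdownMiniCluster();
      }
    }
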
2023-07-12 05:17:38,437 INFO [Listener at localhost.localdomain/37977] http.HttpServer(1146): Jetty bound to port 46539 2023-07-12 05:17:38,437 INFO [Listener at localhost.localdomain/37977] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:38,438 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,439 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5506ba92{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:38,439 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,439 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a833aa2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:38,537 INFO [Listener at localhost.localdomain/37977] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:38,538 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:38,538 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:38,538 INFO [Listener at localhost.localdomain/37977] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 05:17:38,539 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,540 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5cb25272{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/jetty-0_0_0_0-46539-hbase-server-2_4_18-SNAPSHOT_jar-_-any-361491770667922111/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:38,541 INFO [Listener at localhost.localdomain/37977] server.AbstractConnector(333): Started ServerConnector@470cc644{HTTP/1.1, (http/1.1)}{0.0.0.0:46539} 2023-07-12 05:17:38,542 INFO [Listener at localhost.localdomain/37977] server.Server(415): Started @42472ms 2023-07-12 05:17:38,554 INFO [Listener at localhost.localdomain/37977] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:38,554 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,555 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:38,555 INFO [Listener at localhost.localdomain/37977] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:38,555 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,555 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:38,555 INFO [Listener at localhost.localdomain/37977] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:38,557 INFO [Listener at localhost.localdomain/37977] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38957 2023-07-12 05:17:38,557 INFO [Listener at localhost.localdomain/37977] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:38,558 DEBUG [Listener at localhost.localdomain/37977] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:38,558 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,559 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,560 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38957 connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:38,563 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:389570x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:38,565 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:389570x0, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:38,565 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38957-0x1007f9d12c00002 connected 2023-07-12 05:17:38,566 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:38,567 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:38,568 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38957 2023-07-12 05:17:38,568 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38957 2023-07-12 05:17:38,568 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38957 2023-07-12 05:17:38,569 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38957 2023-07-12 05:17:38,569 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38957 2023-07-12 05:17:38,571 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:38,571 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:38,571 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:38,571 INFO [Listener at localhost.localdomain/37977] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:38,571 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:38,572 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:38,572 INFO [Listener at localhost.localdomain/37977] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 05:17:38,572 INFO [Listener at localhost.localdomain/37977] http.HttpServer(1146): Jetty bound to port 41211 2023-07-12 05:17:38,572 INFO [Listener at localhost.localdomain/37977] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:38,575 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,575 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3acab148{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:38,575 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,576 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b2bbe25{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:38,688 INFO [Listener at localhost.localdomain/37977] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:38,689 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:38,689 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:38,689 INFO [Listener at localhost.localdomain/37977] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:38,690 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,690 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4faab4a8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/jetty-0_0_0_0-41211-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8580037036968278584/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:38,693 INFO [Listener at localhost.localdomain/37977] server.AbstractConnector(333): Started ServerConnector@693c409e{HTTP/1.1, (http/1.1)}{0.0.0.0:41211} 2023-07-12 05:17:38,694 INFO [Listener at localhost.localdomain/37977] server.Server(415): Started @42624ms 2023-07-12 05:17:38,706 INFO [Listener at localhost.localdomain/37977] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:38,706 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,706 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 
05:17:38,706 INFO [Listener at localhost.localdomain/37977] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:38,707 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:38,707 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:38,707 INFO [Listener at localhost.localdomain/37977] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:38,708 INFO [Listener at localhost.localdomain/37977] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43053 2023-07-12 05:17:38,709 INFO [Listener at localhost.localdomain/37977] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:38,718 DEBUG [Listener at localhost.localdomain/37977] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:38,719 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,720 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,722 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43053 connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:38,735 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:430530x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:38,737 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:430530x0, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:38,738 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43053-0x1007f9d12c00003 connected 2023-07-12 05:17:38,738 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:38,739 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:38,740 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43053 2023-07-12 05:17:38,740 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43053 2023-07-12 05:17:38,740 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43053 2023-07-12 05:17:38,743 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43053 2023-07-12 05:17:38,743 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43053 2023-07-12 05:17:38,745 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:38,745 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:38,745 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:38,746 INFO [Listener at localhost.localdomain/37977] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:38,746 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:38,746 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:38,746 INFO [Listener at localhost.localdomain/37977] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
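The RpcExecutor records above show each region server starting a small set of call queues: a default FIFO queue with three handlers, a priority queue split into read and write handlers, plus replication and metaPriority queues. A hedged sketch of the standard site keys that drive those counts (key names are regular HBase properties recalled from memory, not read from this log; the values are illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueTuningSketch {
        public static Configuration tunedConf() {
            // Start from hbase-default.xml + hbase-site.xml.
            Configuration conf = HBaseConfiguration.create();
            // Total RPC handler threads per region server (the mini cluster above runs with a small count).
            conf.setInt("hbase.regionserver.handler.count", 3);
            // Fraction of handlers that get a dedicated call queue; a value near 0 means few shared queues.
            conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0.1f);
            // Share of call queues reserved for reads vs. writes, and for long scans within the reads.
            conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.6f);
            conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);
            return conf;
        }
    }

Raising hbase.regionserver.handler.count is the usual first knob for RPC concurrency; the ratio keys only change how the existing handlers are divided between queues.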
2023-07-12 05:17:38,747 INFO [Listener at localhost.localdomain/37977] http.HttpServer(1146): Jetty bound to port 42167 2023-07-12 05:17:38,747 INFO [Listener at localhost.localdomain/37977] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:38,748 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,749 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c7d608e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:38,749 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,749 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@27ea5a91{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:38,852 INFO [Listener at localhost.localdomain/37977] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:38,853 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:38,853 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:38,854 INFO [Listener at localhost.localdomain/37977] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:38,854 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:38,855 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@165bed6e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/jetty-0_0_0_0-42167-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4056492194418446744/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:38,857 INFO [Listener at localhost.localdomain/37977] server.AbstractConnector(333): Started ServerConnector@46715e66{HTTP/1.1, (http/1.1)}{0.0.0.0:42167} 2023-07-12 05:17:38,857 INFO [Listener at localhost.localdomain/37977] server.Server(415): Started @42787ms 2023-07-12 05:17:38,860 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:38,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@a9d7a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:39815} 2023-07-12 05:17:38,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @42795ms 2023-07-12 05:17:38,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:38,866 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:38,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:38,867 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:38,867 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:38,867 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:38,867 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:38,869 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:38,869 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:38,871 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:38,874 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,46251,1689139058273 from backup master directory 2023-07-12 05:17:38,882 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:38,882 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 05:17:38,882 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
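The ZKWatcher and ZKUtil records above are the master and region servers putting watches on znodes such as /hbase/master and /hbase/backup-masters and then reacting to NodeCreated, NodeDeleted and NodeChildrenChanged events. A minimal sketch of the same watch-and-react pattern with the plain ZooKeeper client (the ensemble address, session timeout and paths are placeholders, not values from this run):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder ensemble address and session timeout; the mini cluster uses its own port.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, event -> { });
            Watcher watcher = (WatchedEvent event) ->
                    System.out.println("Got " + event.getType() + " on " + event.getPath());
            // exists() registers the watch even if the znode is not there yet, mirroring
            // "Set watcher on znode that does not yet exist, /hbase/master" above.
            zk.exists("/hbase/master", watcher);
            // getChildren() watches for NodeChildrenChanged (assumes the parent znode exists).
            zk.getChildren("/hbase/backup-masters", watcher);
            Thread.sleep(10_000); // keep the session open long enough to observe events
            zk.close();
        }
    }

Registering the watch with exists() rather than getData() is what lets a watcher be armed before the znode is created, which the backup-master handoff above relies on.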
2023-07-12 05:17:38,882 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:38,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/hbase.id with ID: 0797ad40-1c45-4915-bd65-43eff252aff1 2023-07-12 05:17:38,928 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:38,930 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:38,955 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0ea2ce13 to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:38,958 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a8d97c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:38,958 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:38,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 05:17:38,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:38,960 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store-tmp 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:38,969 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:38,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:38,969 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:38,970 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/WALs/jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:38,972 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46251%2C1689139058273, suffix=, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/WALs/jenkins-hbase20.apache.org,46251,1689139058273, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/oldWALs, maxLogs=10 2023-07-12 05:17:38,993 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:38,994 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:38,994 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:38,999 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/WALs/jenkins-hbase20.apache.org,46251,1689139058273/jenkins-hbase20.apache.org%2C46251%2C1689139058273.1689139058972 2023-07-12 05:17:38,999 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK], DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK]] 2023-07-12 05:17:38,999 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:39,000 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:39,000 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,000 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,001 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,002 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 05:17:39,003 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 05:17:39,003 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,004 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,004 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 05:17:39,008 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-12 05:17:39,009 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10632926400, jitterRate=-0.009731560945510864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:39,009 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:39,009 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 05:17:39,011 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 05:17:39,011 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 05:17:39,011 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 05:17:39,012 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 05:17:39,012 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 05:17:39,012 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 05:17:39,013 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 05:17:39,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
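The records above cover the master's local store region being bootstrapped onto an AsyncFSWAL (blocksize=256 MB, rollsize=128 MB, maxLogs=10) and the procedure executor starting five core workers. A hedged sketch of the site keys behind those numbers (property names are standard HBase keys recalled from memory; this is not the code path the master itself runs):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndProcedureTuningSketch {
        public static Configuration sketch() {
            Configuration conf = HBaseConfiguration.create();
            // WAL implementation; "asyncfs" selects the AsyncFSWALProvider seen above.
            conf.set("hbase.wal.provider", "asyncfs");
            // WAL block size; the roll size is blocksize * multiplier (256 MB * 0.5 = 128 MB above).
            conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
            conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
            // Cap on un-archived WAL files before flushes are forced (the master-store WAL above uses 10).
            conf.setInt("hbase.regionserver.maxlogs", 10);
            // Core worker threads for the master procedure executor ("Starting 5 core workers").
            conf.setInt("hbase.master.procedure.threads", 5);
            return conf;
        }
    }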
2023-07-12 05:17:39,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 05:17:39,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 05:17:39,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 05:17:39,016 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:39,017 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 05:17:39,017 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 05:17:39,018 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 05:17:39,019 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:39,019 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:39,019 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:39,019 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:39,019 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:39,019 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,46251,1689139058273, sessionid=0x1007f9d12c00000, setting cluster-up flag (Was=false) 2023-07-12 05:17:39,023 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:39,025 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 05:17:39,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:39,028 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:39,030 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 05:17:39,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:39,032 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.hbase-snapshot/.tmp 2023-07-12 05:17:39,032 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 05:17:39,033 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 05:17:39,034 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:39,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 05:17:39,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 05:17:39,035 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:39,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:39,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
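The balancer records above print the StochasticLoadBalancer's effective search budget (maxSteps, stepsPerRegion, maxRunningTime) and its cost-function list. A hedged sketch of the corresponding configuration keys (names recalled from memory, values copied from the log line purely for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
        public static Configuration sketch() {
            Configuration conf = HBaseConfiguration.create();
            // Balancer implementation loaded by the master above.
            conf.set("hbase.master.loadbalancer.class",
                    "org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer");
            // Search-budget knobs reported in the "Loaded config" line.
            conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
            conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
            conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
            conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
            return conf;
        }
    }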
2023-07-12 05:17:39,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 05:17:39,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 05:17:39,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:39,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,052 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689139089052 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 05:17:39,053 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, 
state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 05:17:39,053 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 05:17:39,053 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 05:17:39,055 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139059055,5,FailOnTimeoutGroup] 2023-07-12 05:17:39,055 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139059055,5,FailOnTimeoutGroup] 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
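The cleaner records above list the log and HFile cleaner delegates the master loads (TimeToLiveLogCleaner, ReplicationLogCleaner, HFileLinkCleaner, SnapshotHFileCleaner, TimeToLiveHFileCleaner). A hedged sketch of how such chains are normally declared (the plugin keys are standard HBase properties recalled from memory; the class lists below simply repeat what the log shows being initialized):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerChainSketch {
        public static Configuration sketch() {
            Configuration conf = HBaseConfiguration.create();
            // Old-WAL cleaner delegates, matching the LogsCleaner chore initialized above.
            conf.set("hbase.master.logcleaner.plugins",
                    "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
                            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
            // How long archived WALs are retained before the TTL cleaner deletes them (ms, illustrative).
            conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
            // Archived-HFile cleaner delegates, matching the HFileCleaner chore above.
            conf.set("hbase.master.hfilecleaner.plugins",
                    "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
                            + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,"
                            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
            return conf;
        }
    }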
2023-07-12 05:17:39,055 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:39,055 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,059 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(951): ClusterId : 0797ad40-1c45-4915-bd65-43eff252aff1 2023-07-12 05:17:39,059 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:39,060 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(951): ClusterId : 0797ad40-1c45-4915-bd65-43eff252aff1 2023-07-12 05:17:39,060 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:39,060 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:39,060 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:39,062 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:39,062 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:39,062 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:39,064 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:39,071 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(951): ClusterId : 0797ad40-1c45-4915-bd65-43eff252aff1 2023-07-12 05:17:39,071 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:39,079 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ReadOnlyZKClient(139): Connect 0x59f81d7e to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:39,080 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ReadOnlyZKClient(139): Connect 0x25f1539f to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:39,085 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(45): 
Procedure flush-table-proc initialized 2023-07-12 05:17:39,085 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:39,088 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:39,093 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ReadOnlyZKClient(139): Connect 0x6d1a89af to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:39,100 DEBUG [RS:0;jenkins-hbase20:36541] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12991ab6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:39,100 DEBUG [RS:0;jenkins-hbase20:36541] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@141e7038, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:39,104 DEBUG [RS:1;jenkins-hbase20:38957] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@573ff831, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:39,104 DEBUG [RS:1;jenkins-hbase20:38957] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5300760e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:39,110 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36541 2023-07-12 05:17:39,110 INFO [RS:0;jenkins-hbase20:36541] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:39,110 INFO [RS:0;jenkins-hbase20:36541] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:39,110 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 05:17:39,118 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:38957 2023-07-12 05:17:39,114 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:39,112 DEBUG [RS:2;jenkins-hbase20:43053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63dd6a7c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:39,118 INFO [RS:1;jenkins-hbase20:38957] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:39,118 INFO [RS:1;jenkins-hbase20:38957] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:39,118 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:39,118 DEBUG [RS:2;jenkins-hbase20:43053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67e2e2df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:39,118 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,46251,1689139058273 with isa=jenkins-hbase20.apache.org/148.251.75.209:38957, startcode=1689139058554 2023-07-12 05:17:39,119 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,46251,1689139058273 with isa=jenkins-hbase20.apache.org/148.251.75.209:36541, startcode=1689139058420 2023-07-12 05:17:39,119 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:39,119 DEBUG [RS:1;jenkins-hbase20:38957] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:39,119 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', 
REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad 2023-07-12 05:17:39,119 DEBUG [RS:0;jenkins-hbase20:36541] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:39,122 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44519, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:39,122 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56545, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:39,128 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46251] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,128 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:39,129 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 05:17:39,129 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46251] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,130 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
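The FSTableDescriptors and HRegion records above spell out the hbase:meta descriptor: an 'info' family with VERSIONS=3, IN_MEMORY=true, BLOCKSIZE=8192 and BLOOMFILTER=NONE, alongside 'rep_barrier' and 'table' families. A minimal sketch of building a descriptor with the same family attributes through the public 2.x client API (the table name is hypothetical and this is not how the master creates hbase:meta itself):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
        public static TableDescriptor build() {
            // 'info' family, mirroring the attributes printed in the log above.
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setMaxVersions(3)
                    .setInMemory(true)
                    .setBlocksize(8192)
                    .setBloomFilterType(BloomType.NONE)
                    .build();
            // 'table' family keeps the same small block size; other attributes stay at defaults.
            ColumnFamilyDescriptor table = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("table"))
                    .setMaxVersions(3)
                    .setInMemory(true)
                    .setBlocksize(8192)
                    .build();
            // Hypothetical table name; a real caller would pick its own.
            return TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo:meta_like"))
                    .setColumnFamily(info)
                    .setColumnFamily(table)
                    .build();
        }
    }

In client code such a descriptor would normally be handed to Admin.createTable().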
2023-07-12 05:17:39,130 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 05:17:39,130 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad 2023-07-12 05:17:39,130 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad 2023-07-12 05:17:39,130 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41411 2023-07-12 05:17:39,130 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41411 2023-07-12 05:17:39,130 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46253 2023-07-12 05:17:39,130 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46253 2023-07-12 05:17:39,131 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:43053 2023-07-12 05:17:39,131 INFO [RS:2;jenkins-hbase20:43053] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:39,131 INFO [RS:2;jenkins-hbase20:43053] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:39,131 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 05:17:39,133 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:39,133 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,46251,1689139058273 with isa=jenkins-hbase20.apache.org/148.251.75.209:43053, startcode=1689139058705 2023-07-12 05:17:39,134 DEBUG [RS:2;jenkins-hbase20:43053] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:39,134 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,134 WARN [RS:0;jenkins-hbase20:36541] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 05:17:39,134 INFO [RS:0;jenkins-hbase20:36541] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:39,134 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,135 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,135 WARN [RS:1;jenkins-hbase20:38957] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 05:17:39,135 INFO [RS:1;jenkins-hbase20:38957] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:39,135 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,136 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36541,1689139058420] 2023-07-12 05:17:39,136 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38957,1689139058554] 2023-07-12 05:17:39,139 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:39,141 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46251] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,144 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 05:17:39,144 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 05:17:39,144 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad 2023-07-12 05:17:39,145 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41411 2023-07-12 05:17:39,145 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46253 2023-07-12 05:17:39,146 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:39,147 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43053,1689139058705] 2023-07-12 05:17:39,147 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,147 WARN [RS:2;jenkins-hbase20:43053] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 05:17:39,147 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,147 INFO [RS:2;jenkins-hbase20:43053] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:39,147 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,147 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,147 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,148 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,148 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,150 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,153 DEBUG [RS:0;jenkins-hbase20:36541] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:39,155 INFO [RS:0;jenkins-hbase20:36541] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:39,155 DEBUG [RS:1;jenkins-hbase20:38957] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:39,156 INFO [RS:1;jenkins-hbase20:38957] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:39,156 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:39,156 INFO [RS:0;jenkins-hbase20:36541] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:39,157 INFO [RS:0;jenkins-hbase20:36541] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:39,157 INFO [RS:1;jenkins-hbase20:38957] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:39,158 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,158 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:39,159 INFO [RS:1;jenkins-hbase20:38957] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:39,159 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,159 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,159 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,159 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:39,160 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:39,161 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
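The repeated "Set watcher on existing znode=/hbase/rs/..." entries are the servers tracking cluster membership through child znodes under /hbase/rs. A hedged, standalone illustration using the plain Apache ZooKeeper client rather than HBase's internal ZKUtil/ZKWatcher classes; the quorum address is the one printed in the log and stands in for any real quorum:

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

// Sketch: watching the live-regionserver list the way the log entries above imply.
// HBase itself does this via ZKWatcher/RegionServerTracker; this is the raw-client analogue.
public class LiveServersWatchSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:55884", 30_000,
                (WatchedEvent event) -> System.out.println("ZK event: " + event.getType() + " " + event.getPath()));
        // watch=true arms a one-shot watch; NodeChildrenChanged fires when a region server's
        // ephemeral znode appears or disappears, matching the events seen in this log.
        List<String> liveServers = zk.getChildren("/hbase/rs", true);
        liveServers.forEach(s -> System.out.println("live: " + s));
        zk.close();
    }
}
```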
2023-07-12 05:17:39,161 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,163 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,163 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,163 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:39,164 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/info 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:1;jenkins-hbase20:38957] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 DEBUG [RS:2;jenkins-hbase20:43053] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:39,164 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,164 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:39,165 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:39,165 INFO [RS:2;jenkins-hbase20:43053] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:39,165 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,165 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,170 INFO [RS:2;jenkins-hbase20:43053] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:39,166 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,172 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,172 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,173 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
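The "Compaction throughput configurations" and "CompactionConfiguration(173): ... minFilesToCompact:3, maxFilesToCompact:10 ... ratio 1.200000" entries are driven by the standard compaction settings. A sketch of the usual keys behind those numbers; this run appears to rely on the defaults, so the values below simply restate what the log prints and are not something the test sets:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative only: the stock keys behind the compaction figures logged above.
public class CompactionTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // PressureAwareCompactionThroughputController bounds, in bytes/sec (100 MB/s and 50 MB/s).
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        // Selection-policy inputs: minFilesToCompact, maxFilesToCompact, ratio.
        conf.setInt("hbase.hstore.compaction.min", 3);
        conf.setInt("hbase.hstore.compaction.max", 10);
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    }
}
```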
2023-07-12 05:17:39,173 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,173 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:39,173 DEBUG [RS:0;jenkins-hbase20:36541] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,173 INFO [RS:2;jenkins-hbase20:43053] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:39,173 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,174 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:39,175 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,176 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,176 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,177 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,177 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:39,178 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,178 DEBUG [RS:2;jenkins-hbase20:43053] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:39,183 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,183 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:39,183 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:39,183 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,184 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,188 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/table 2023-07-12 05:17:39,188 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:39,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,190 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740 2023-07-12 05:17:39,190 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740 2023-07-12 05:17:39,192 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 05:17:39,193 INFO [RS:1;jenkins-hbase20:38957] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:39,193 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38957,1689139058554-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,194 INFO [RS:0;jenkins-hbase20:36541] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:39,195 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36541,1689139058420-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
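The "Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, ..." entries show the block-cache behaviour each store of region 1588230740 inherits. For a user table those flags are normally declared on the column family descriptor; a hedged sketch with a hypothetical family name "f" (the defaults already match what the log prints, so the setters are shown only to map the flags to API calls):

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: expressing the cacheConfig flags from the log on a hypothetical family.
public class CacheConfigSketch {
    public static void main(String[] args) {
        ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setBlockCacheEnabled(true)      // cacheDataOnRead=true
                .setCacheDataOnWrite(false)
                .setCacheIndexesOnWrite(false)
                .setCacheBloomsOnWrite(false)
                .setEvictBlocksOnClose(false)
                .setPrefetchBlocksOnOpen(false)
                .build();
        System.out.println(cf);
    }
}
```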
2023-07-12 05:17:39,197 INFO [RS:2;jenkins-hbase20:43053] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:39,197 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43053,1689139058705-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,200 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:39,204 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:39,205 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9609861600, jitterRate=-0.10501189529895782}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:39,205 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:39,205 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:39,206 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:39,206 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:39,206 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 05:17:39,206 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:39,210 INFO [RS:0;jenkins-hbase20:36541] regionserver.Replication(203): jenkins-hbase20.apache.org,36541,1689139058420 started 2023-07-12 05:17:39,210 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36541,1689139058420, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36541, sessionid=0x1007f9d12c00001 2023-07-12 05:17:39,210 INFO [RS:1;jenkins-hbase20:38957] regionserver.Replication(203): jenkins-hbase20.apache.org,38957,1689139058554 started 2023-07-12 05:17:39,210 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:39,210 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:39,210 DEBUG [RS:0;jenkins-hbase20:36541] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,210 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38957,1689139058554, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38957, sessionid=0x1007f9d12c00002 2023-07-12 05:17:39,211 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36541,1689139058420' 2023-07-12 05:17:39,211 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:39,211 DEBUG [RS:1;jenkins-hbase20:38957] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,211 DEBUG 
[RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38957,1689139058554' 2023-07-12 05:17:39,211 DEBUG [RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:39,211 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:39,211 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:39,212 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 05:17:39,212 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 05:17:39,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 05:17:39,213 INFO [RS:2;jenkins-hbase20:43053] regionserver.Replication(203): jenkins-hbase20.apache.org,43053,1689139058705 started 2023-07-12 05:17:39,213 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43053,1689139058705, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43053, sessionid=0x1007f9d12c00003 2023-07-12 05:17:39,213 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:39,213 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 05:17:39,213 DEBUG [RS:2;jenkins-hbase20:43053] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,213 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43053,1689139058705' 2023-07-12 05:17:39,214 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:39,213 DEBUG [RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:39,215 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:39,215 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:39,215 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:39,215 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:39,215 DEBUG [RS:1;jenkins-hbase20:38957] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:39,215 DEBUG [RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase20.apache.org,38957,1689139058554' 2023-07-12 05:17:39,215 DEBUG [RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:39,215 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:39,215 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:39,215 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:39,215 DEBUG [RS:2;jenkins-hbase20:43053] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,216 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 05:17:39,216 DEBUG [RS:1;jenkins-hbase20:38957] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:39,216 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43053,1689139058705' 2023-07-12 05:17:39,216 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:39,215 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:39,216 DEBUG [RS:0;jenkins-hbase20:36541] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:39,216 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36541,1689139058420' 2023-07-12 05:17:39,216 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:39,216 DEBUG [RS:1;jenkins-hbase20:38957] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:39,216 INFO [RS:1;jenkins-hbase20:38957] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:39,216 INFO [RS:1;jenkins-hbase20:38957] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
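The "Quota support disabled" / "not starting space quota manager" entries appear because RPC and space quotas are off by default. A one-line sketch of the switch that would change that, assuming the standard hbase.quota.enabled key (this test leaves it at its default of false):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: quotas are opt-in; the log above shows the default (disabled) code path.
public class QuotaSwitchSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true); // would start the RPC/space quota managers
    }
}
```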
2023-07-12 05:17:39,216 DEBUG [RS:2;jenkins-hbase20:43053] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:39,216 DEBUG [RS:0;jenkins-hbase20:36541] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:39,216 DEBUG [RS:2;jenkins-hbase20:43053] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:39,217 INFO [RS:2;jenkins-hbase20:43053] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:39,217 DEBUG [RS:0;jenkins-hbase20:36541] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:39,217 INFO [RS:2;jenkins-hbase20:43053] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 05:17:39,217 INFO [RS:0;jenkins-hbase20:36541] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:39,217 INFO [RS:0;jenkins-hbase20:36541] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 05:17:39,306 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 05:17:39,319 INFO [RS:1;jenkins-hbase20:38957] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38957%2C1689139058554, suffix=, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,38957,1689139058554, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs, maxLogs=32 2023-07-12 05:17:39,319 INFO [RS:0;jenkins-hbase20:36541] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36541%2C1689139058420, suffix=, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,36541,1689139058420, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs, maxLogs=32 2023-07-12 05:17:39,319 INFO [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43053%2C1689139058705, suffix=, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,43053,1689139058705, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs, maxLogs=32 2023-07-12 05:17:39,358 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:39,363 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:39,364 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:39,364 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:39,364 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:39,364 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:39,366 DEBUG [jenkins-hbase20:46251] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 05:17:39,366 DEBUG [jenkins-hbase20:46251] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:39,366 DEBUG [jenkins-hbase20:46251] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:39,366 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:39,366 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:39,366 DEBUG [jenkins-hbase20:46251] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:39,367 DEBUG [jenkins-hbase20:46251] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:39,367 DEBUG [jenkins-hbase20:46251] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:39,368 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:39,368 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43053,1689139058705, state=OPENING 2023-07-12 05:17:39,369 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 05:17:39,371 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:39,371 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:39,371 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43053,1689139058705}] 2023-07-12 05:17:39,390 INFO [RS:1;jenkins-hbase20:38957] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,38957,1689139058554/jenkins-hbase20.apache.org%2C38957%2C1689139058554.1689139059319 2023-07-12 05:17:39,395 INFO [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,43053,1689139058705/jenkins-hbase20.apache.org%2C43053%2C1689139058705.1689139059320 2023-07-12 05:17:39,395 DEBUG [RS:1;jenkins-hbase20:38957] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK], DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK]] 2023-07-12 05:17:39,395 INFO [RS:0;jenkins-hbase20:36541] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,36541,1689139058420/jenkins-hbase20.apache.org%2C36541%2C1689139058420.1689139059320 2023-07-12 05:17:39,396 DEBUG [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK], DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK]] 2023-07-12 05:17:39,398 DEBUG [RS:0;jenkins-hbase20:36541] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK], DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK]] 2023-07-12 05:17:39,547 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,547 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:39,549 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40388, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:39,553 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 05:17:39,553 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:39,555 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43053%2C1689139058705.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,43053,1689139058705, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs, maxLogs=32 2023-07-12 05:17:39,572 DEBUG 
[RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:39,573 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:39,573 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:39,577 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,43053,1689139058705/jenkins-hbase20.apache.org%2C43053%2C1689139058705.meta.1689139059556.meta 2023-07-12 05:17:39,578 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK], DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK]] 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 05:17:39,579 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
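The "Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint ... from HTD of hbase:meta" entries show a coprocessor declared directly in the table descriptor. A sketch of how the same endpoint would be attached to an ordinary table descriptor; the table name "demo" and family "m" are hypothetical:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Sketch: attaching MultiRowMutationEndpoint via the table descriptor, which is how the
// log shows it being loaded "from HTD" when the region opens.
public class CoprocessorOnDescriptorSketch {
    public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .build();
        System.out.println(td);
    }
}
```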
2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 05:17:39,579 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 05:17:39,582 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 05:17:39,585 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/info 2023-07-12 05:17:39,585 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/info 2023-07-12 05:17:39,585 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 05:17:39,587 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,587 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 05:17:39,589 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:39,589 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/rep_barrier 2023-07-12 05:17:39,590 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 05:17:39,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 05:17:39,591 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/table 2023-07-12 05:17:39,592 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/table 2023-07-12 05:17:39,592 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 05:17:39,593 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:39,594 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740 2023-07-12 05:17:39,597 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740 2023-07-12 05:17:39,603 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
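The "No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor" entries show FlushLargeStoresPolicy falling back to memstore-flush-size divided by the number of families. That key can be supplied per table through a descriptor value; a hedged sketch on a hypothetical table, with an arbitrary 16 MB figure that is not taken from the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Sketch: providing the per-column-family flush lower bound the log says is missing.
public class PerFamilyFlushBoundSketch {
    public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                          String.valueOf(16L * 1024 * 1024))
                .build();
        System.out.println(td.getValue("hbase.hregion.percolumnfamilyflush.size.lower.bound"));
    }
}
```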
2023-07-12 05:17:39,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 05:17:39,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10394165440, jitterRate=-0.03196790814399719}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 05:17:39,614 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 05:17:39,616 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689139059547 2023-07-12 05:17:39,625 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 05:17:39,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 05:17:39,627 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43053,1689139058705, state=OPEN 2023-07-12 05:17:39,628 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 05:17:39,628 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 05:17:39,633 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 05:17:39,633 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43053,1689139058705 in 257 msec 2023-07-12 05:17:39,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 05:17:39,640 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 422 msec 2023-07-12 05:17:39,644 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 607 msec 2023-07-12 05:17:39,644 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689139059644, completionTime=-1 2023-07-12 05:17:39,644 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 05:17:39,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
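Once the "Setting hbase:meta replicaId=0 location in ZooKeeper ... state=OPEN" entry and the InitMetaProcedure above complete, clients can resolve where hbase:meta landed. A small client-side sketch; the quorum and client port are the values this test run prints and stand in for a real deployment's settings:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

// Sketch: resolving the hbase:meta location the master just published in ZooKeeper.
public class LocateMetaSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "55884");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
            System.out.println("hbase:meta is on " + loc.getServerName());
        }
    }
}
```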
2023-07-12 05:17:39,654 DEBUG [hconnection-0x2377f24b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:39,656 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:39,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 05:17:39,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:39,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689139119658 2023-07-12 05:17:39,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689139179658 2023-07-12 05:17:39,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 14 msec 2023-07-12 05:17:39,661 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 05:17:39,665 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 05:17:39,668 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46251,1689139058273-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,668 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46251,1689139058273-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,668 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46251,1689139058273-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,669 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:46251, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:39,669 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
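The RSGroupStartupWorker entry above is bootstrapping the hbase:rsgroup table that backs the rsgroup feature this suite (TestRSGroupsAdmin1) exercises. A heavily hedged sketch of the kind of admin calls made once startup finishes, assuming the RSGroupAdminClient API shipped in the branch-2 hbase-rsgroup module; the group name "appgroup" is made up and the actual groups used by the test are not shown in this excerpt:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Hedged sketch: rsgroup admin calls that are ultimately persisted in the hbase:rsgroup
// table whose creation is logged above.
public class RsGroupAdminSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
            rsGroupAdmin.addRSGroup("appgroup");
            for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
                System.out.println(group.getName() + " -> " + group.getServers());
            }
        }
    }
}
```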
2023-07-12 05:17:39,669 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-12 05:17:39,669 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:39,669 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:39,670 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:39,670 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 05:17:39,673 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:39,673 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:39,674 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149 empty. 2023-07-12 05:17:39,674 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:39,674 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 05:17:39,675 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:39,675 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 05:17:39,679 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:39,681 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688 empty. 
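"Namespace table not found. Creating..." is the master bootstrapping the hbase:namespace system table; user namespaces are later stored there via the Admin API. A minimal client sketch with a hypothetical namespace name:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch: entries created through createNamespace() end up in the hbase:namespace table
// whose creation is logged above. "test_ns" is illustrative, not from the log.
public class CreateNamespaceSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
            System.out.println("namespaces: " + admin.listNamespaceDescriptors().length);
        }
    }
}
```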
2023-07-12 05:17:39,682 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:39,682 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 05:17:39,760 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:39,762 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f545f95dd04fc91710fc223400fcc688, NAME => 'hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp 2023-07-12 05:17:39,772 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:39,773 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab3ab2de8bb806ac68ad6a5825a00149, NAME => 'hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp 2023-07-12 05:17:39,822 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:39,822 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f545f95dd04fc91710fc223400fcc688, disabling compactions & flushes 2023-07-12 05:17:39,822 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:39,822 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:39,822 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 
after waiting 0 ms 2023-07-12 05:17:39,822 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:39,822 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:39,823 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f545f95dd04fc91710fc223400fcc688: 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing ab3ab2de8bb806ac68ad6a5825a00149, disabling compactions & flushes 2023-07-12 05:17:39,827 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. after waiting 0 ms 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:39,827 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:39,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for ab3ab2de8bb806ac68ad6a5825a00149: 2023-07-12 05:17:39,828 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:39,829 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139059829"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139059829"}]},"ts":"1689139059829"} 2023-07-12 05:17:39,829 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:39,831 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139059831"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139059831"}]},"ts":"1689139059831"} 2023-07-12 05:17:39,832 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:39,832 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
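The Put records above show the CREATE_TABLE_ADD_TO_META step writing `info:regioninfo` and `info:state` cells for the new regions into `hbase:meta`. For illustration only, those cells can be read back with an ordinary client scan; the sketch below assumes a reachable cluster and is not part of the test code.

```java
// Hedged sketch only: reading the meta cells written by CREATE_TABLE_ADD_TO_META above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.valueOf("hbase:meta"));
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result r : scanner) {
        // Row keys look like 'hbase:namespace,,1689139059669.<encoded-region-name>.'
        byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
        System.out.println(Bytes.toString(r.getRow()) + " state="
            + (state == null ? "n/a" : Bytes.toString(state)));
      }
    }
  }
}
```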
2023-07-12 05:17:39,832 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:39,833 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139059832"}]},"ts":"1689139059832"} 2023-07-12 05:17:39,833 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:39,833 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139059833"}]},"ts":"1689139059833"} 2023-07-12 05:17:39,834 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 05:17:39,834 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 05:17:39,836 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:39,836 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:39,836 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:39,836 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:39,836 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:39,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f545f95dd04fc91710fc223400fcc688, ASSIGN}] 2023-07-12 05:17:39,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:39,836 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:39,837 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:39,837 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:39,837 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:39,837 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ab2de8bb806ac68ad6a5825a00149, ASSIGN}] 2023-07-12 05:17:39,837 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f545f95dd04fc91710fc223400fcc688, ASSIGN 2023-07-12 05:17:39,839 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ab2de8bb806ac68ad6a5825a00149, ASSIGN 2023-07-12 05:17:39,839 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f545f95dd04fc91710fc223400fcc688, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43053,1689139058705; forceNewPlan=false, retain=false 2023-07-12 05:17:39,839 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ab2de8bb806ac68ad6a5825a00149, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43053,1689139058705; forceNewPlan=false, retain=false 2023-07-12 05:17:39,840 INFO [jenkins-hbase20:46251] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 05:17:39,842 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ab3ab2de8bb806ac68ad6a5825a00149, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,842 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f545f95dd04fc91710fc223400fcc688, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:39,842 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139059842"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139059842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139059842"}]},"ts":"1689139059842"} 2023-07-12 05:17:39,842 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139059842"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139059842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139059842"}]},"ts":"1689139059842"} 2023-07-12 05:17:39,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure f545f95dd04fc91710fc223400fcc688, server=jenkins-hbase20.apache.org,43053,1689139058705}] 2023-07-12 05:17:39,845 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure ab3ab2de8bb806ac68ad6a5825a00149, server=jenkins-hbase20.apache.org,43053,1689139058705}] 2023-07-12 05:17:40,002 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 
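Once the TransitRegionStateProcedure/OpenRegionProcedure pair above finishes, the chosen placement (here jenkins-hbase20.apache.org,43053,...) becomes visible to clients. A hedged sketch of observing it through RegionLocator follows; the connection scaffolding is assumed, not taken from the test.

```java
// Hedged sketch only: reading a table's region placement after assignment completes.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // For the single rsgroup region this should report the server picked above.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```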
2023-07-12 05:17:40,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab3ab2de8bb806ac68ad6a5825a00149, NAME => 'hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. service=MultiRowMutationService 2023-07-12 05:17:40,003 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,005 INFO [StoreOpener-ab3ab2de8bb806ac68ad6a5825a00149-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,007 DEBUG [StoreOpener-ab3ab2de8bb806ac68ad6a5825a00149-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/m 2023-07-12 05:17:40,007 DEBUG [StoreOpener-ab3ab2de8bb806ac68ad6a5825a00149-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/m 2023-07-12 05:17:40,007 INFO [StoreOpener-ab3ab2de8bb806ac68ad6a5825a00149-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab3ab2de8bb806ac68ad6a5825a00149 columnFamilyName m 2023-07-12 05:17:40,008 INFO [StoreOpener-ab3ab2de8bb806ac68ad6a5825a00149-1] regionserver.HStore(310): Store=ab3ab2de8bb806ac68ad6a5825a00149/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:40,008 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,009 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:40,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:40,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ab3ab2de8bb806ac68ad6a5825a00149; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@58dab0ca, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:40,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ab3ab2de8bb806ac68ad6a5825a00149: 2023-07-12 05:17:40,014 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149., pid=9, masterSystemTime=1689139059996 2023-07-12 05:17:40,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:40,016 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:40,016 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 
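The CompactionConfiguration record above (minCompactSize 128 MB, min/max files 3/10, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms, jitter 0.5) reflects default values. The sketch below sets the standard configuration keys that, to my reading, feed those fields; the key-to-field mapping is my assumption, not something this log states.

```java
// Hedged sketch only: the usual knobs behind the CompactionConfiguration values logged above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);                    // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                   // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);             // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);     // off-peak ratio 5.000000
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);        // major period (7 days, ms)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);      // major jitter 0.500000
    // The logged 'throttle point 2684354560' is 2 * maxFilesToCompact * the 128 MB flush size.
    System.out.println("hbase.hstore.compaction.min=" + conf.get("hbase.hstore.compaction.min"));
  }
}
```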
2023-07-12 05:17:40,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f545f95dd04fc91710fc223400fcc688, NAME => 'hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:40,017 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ab3ab2de8bb806ac68ad6a5825a00149, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,017 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689139060017"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139060017"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139060017"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139060017"}]},"ts":"1689139060017"} 2023-07-12 05:17:40,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:40,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,018 INFO [StoreOpener-f545f95dd04fc91710fc223400fcc688-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 05:17:40,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure ab3ab2de8bb806ac68ad6a5825a00149, server=jenkins-hbase20.apache.org,43053,1689139058705 in 173 msec 2023-07-12 05:17:40,020 DEBUG [StoreOpener-f545f95dd04fc91710fc223400fcc688-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/info 2023-07-12 05:17:40,020 DEBUG [StoreOpener-f545f95dd04fc91710fc223400fcc688-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/info 2023-07-12 05:17:40,020 INFO [StoreOpener-f545f95dd04fc91710fc223400fcc688-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f545f95dd04fc91710fc223400fcc688 columnFamilyName info 2023-07-12 05:17:40,021 INFO [StoreOpener-f545f95dd04fc91710fc223400fcc688-1] regionserver.HStore(310): Store=f545f95dd04fc91710fc223400fcc688/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:40,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-12 05:17:40,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ab3ab2de8bb806ac68ad6a5825a00149, ASSIGN in 182 msec 2023-07-12 05:17:40,022 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:40,022 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139060022"}]},"ts":"1689139060022"} 2023-07-12 05:17:40,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,023 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 05:17:40,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:40,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:40,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f545f95dd04fc91710fc223400fcc688; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10835456800, jitterRate=0.009130552411079407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:40,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f545f95dd04fc91710fc223400fcc688: 2023-07-12 05:17:40,029 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688., pid=8, masterSystemTime=1689139059996 2023-07-12 05:17:40,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:40,030 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:40,031 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f545f95dd04fc91710fc223400fcc688, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,031 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689139060031"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139060031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139060031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139060031"}]},"ts":"1689139060031"} 2023-07-12 05:17:40,034 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 05:17:40,034 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure f545f95dd04fc91710fc223400fcc688, server=jenkins-hbase20.apache.org,43053,1689139058705 in 188 msec 2023-07-12 05:17:40,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-12 05:17:40,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f545f95dd04fc91710fc223400fcc688, ASSIGN in 198 msec 2023-07-12 05:17:40,036 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:40,036 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139060036"}]},"ts":"1689139060036"} 2023-07-12 05:17:40,037 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 05:17:40,071 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:40,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 413 msec 2023-07-12 05:17:40,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 05:17:40,078 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:40,078 DEBUG [Listener at localhost.localdomain/37977-EventThread] 
zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:40,079 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:40,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 409 msec 2023-07-12 05:17:40,082 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 05:17:40,088 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:40,090 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 8 msec 2023-07-12 05:17:40,093 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 05:17:40,099 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:40,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 7 msec 2023-07-12 05:17:40,107 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 05:17:40,108 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.226sec 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46251,1689139058273-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 05:17:40,108 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46251,1689139058273-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
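The ZKWatcher records above show the master reacting to NodeCreated/NodeChildrenChanged/NodeDataChanged events under /hbase/namespace as the `default` and `hbase` namespaces are created. The sketch below watches the same znode with the plain ZooKeeper client rather than HBase's internal ZKWatcher; the quorum address is copied from the log, everything else is an assumption for illustration.

```java
// Minimal sketch with the plain ZooKeeper client (not HBase's ZKWatcher):
// watch /hbase/namespace for the child/data events seen in the log above.
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class NamespaceZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // e.g. type=NodeChildrenChanged, path=/hbase/namespace
        System.out.println("type=" + event.getType() + ", path=" + event.getPath());
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:55884", 90000, watcher);
    // Registers a one-shot children watch, analogous to the master's watcher above.
    List<String> namespaces = zk.getChildren("/hbase/namespace", true);
    System.out.println("namespaces=" + namespaces); // expect [default, hbase] once created
    Thread.sleep(5000); // toy wait so the watcher has a chance to fire
    zk.close();
  }
}
```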
2023-07-12 05:17:40,109 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 05:17:40,166 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 05:17:40,166 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 05:17:40,170 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:40,170 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:40,171 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:40,171 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 05:17:40,179 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ReadOnlyZKClient(139): Connect 0x1f1f6502 to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:40,183 DEBUG [Listener at localhost.localdomain/37977] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b40045b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:40,185 DEBUG [hconnection-0x22b9b4c0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:40,187 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:40,188 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:40,188 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:40,191 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 05:17:40,192 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 05:17:40,194 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 
05:17:40,194 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:40,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 05:17:40,195 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ReadOnlyZKClient(139): Connect 0x51aa3d38 to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:40,203 DEBUG [Listener at localhost.localdomain/37977] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e5a9c79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:40,204 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:40,206 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:40,210 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1007f9d12c0000a connected 2023-07-12 05:17:40,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,217 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 05:17:40,230 INFO [Listener at localhost.localdomain/37977] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 05:17:40,231 INFO [Listener at 
localhost.localdomain/37977] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 05:17:40,231 INFO [Listener at localhost.localdomain/37977] ipc.NettyRpcServer(120): Bind to /148.251.75.209:34301 2023-07-12 05:17:40,232 INFO [Listener at localhost.localdomain/37977] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 05:17:40,233 DEBUG [Listener at localhost.localdomain/37977] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 05:17:40,234 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:40,235 INFO [Listener at localhost.localdomain/37977] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 05:17:40,236 INFO [Listener at localhost.localdomain/37977] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34301 connecting to ZooKeeper ensemble=127.0.0.1:55884 2023-07-12 05:17:40,239 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:343010x0, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 05:17:40,241 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(162): regionserver:343010x0, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 05:17:40,241 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34301-0x1007f9d12c0000b connected 2023-07-12 05:17:40,242 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 05:17:40,243 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ZKUtil(164): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 05:17:40,245 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34301 2023-07-12 05:17:40,245 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34301 2023-07-12 05:17:40,247 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34301 2023-07-12 05:17:40,247 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34301 2023-07-12 05:17:40,247 DEBUG [Listener at localhost.localdomain/37977] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34301 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global 
filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 05:17:40,249 INFO [Listener at localhost.localdomain/37977] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 05:17:40,250 INFO [Listener at localhost.localdomain/37977] http.HttpServer(1146): Jetty bound to port 42333 2023-07-12 05:17:40,250 INFO [Listener at localhost.localdomain/37977] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 05:17:40,251 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:40,251 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@18f07b38{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,AVAILABLE} 2023-07-12 05:17:40,251 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:40,251 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e0b3138{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 05:17:40,342 INFO [Listener at localhost.localdomain/37977] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 05:17:40,343 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 05:17:40,343 INFO [Listener at localhost.localdomain/37977] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 05:17:40,343 INFO [Listener at localhost.localdomain/37977] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 05:17:40,344 INFO [Listener at localhost.localdomain/37977] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 05:17:40,345 INFO [Listener at localhost.localdomain/37977] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@68b4c5d5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/java.io.tmpdir/jetty-0_0_0_0-42333-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4913472594641268421/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:40,346 INFO [Listener at localhost.localdomain/37977] server.AbstractConnector(333): Started ServerConnector@6aa69225{HTTP/1.1, (http/1.1)}{0.0.0.0:42333} 2023-07-12 05:17:40,346 INFO [Listener at localhost.localdomain/37977] server.Server(415): Started @44276ms 2023-07-12 05:17:40,349 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(951): ClusterId : 0797ad40-1c45-4915-bd65-43eff252aff1 2023-07-12 05:17:40,349 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 05:17:40,350 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 05:17:40,350 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 05:17:40,351 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 05:17:40,353 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ReadOnlyZKClient(139): Connect 0x77eaef25 to 127.0.0.1:55884 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 05:17:40,358 DEBUG [RS:3;jenkins-hbase20:34301] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@792d0fa8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 05:17:40,359 DEBUG [RS:3;jenkins-hbase20:34301] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d128455, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:40,366 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:34301 2023-07-12 05:17:40,366 INFO [RS:3;jenkins-hbase20:34301] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 05:17:40,366 INFO [RS:3;jenkins-hbase20:34301] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 05:17:40,366 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1022): About to register with Master. 
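The "Restoring servers: 1" line earlier, followed by the RS:3 startup above (RPC server on port 34301, Jetty info server, ZooKeeper registration), is the test base bringing the mini cluster back to its expected region-server count. Below is a hedged sketch of how a test typically adds one region server to a running mini cluster; TEST_UTIL and the exact method choice are assumptions, not the actual TestRSGroupsBase code.

```java
// Hedged sketch only: starting one extra in-process region server in a mini cluster.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class RestoreServerSketch {
  public static void restoreOneServer(HBaseTestingUtility TEST_UTIL) throws Exception {
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
    // Boots another HRegionServer in-process; it registers with the master via
    // reportForDuty, as logged above for jenkins-hbase20.apache.org,34301,...
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline();
    System.out.println("started " + rst.getRegionServer().getServerName());
  }
}
```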
2023-07-12 05:17:40,366 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,46251,1689139058273 with isa=jenkins-hbase20.apache.org/148.251.75.209:34301, startcode=1689139060229 2023-07-12 05:17:40,367 DEBUG [RS:3;jenkins-hbase20:34301] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 05:17:40,369 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39677, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 05:17:40,369 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46251] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,370 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 05:17:40,370 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad 2023-07-12 05:17:40,370 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41411 2023-07-12 05:17:40,370 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46253 2023-07-12 05:17:40,376 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:40,376 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:40,376 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:40,376 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:40,376 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:40,376 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,376 WARN [RS:3;jenkins-hbase20:34301] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 05:17:40,376 INFO [RS:3;jenkins-hbase20:34301] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 05:17:40,376 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 05:17:40,376 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,376 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,34301,1689139060229] 2023-07-12 05:17:40,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:40,377 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 05:17:40,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:40,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:40,380 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420
2023-07-12 05:17:40,384 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,386 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,386 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:40,387 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,387 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ZKUtil(162): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:40,387 DEBUG [RS:3;jenkins-hbase20:34301] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 05:17:40,388 INFO [RS:3;jenkins-hbase20:34301] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 05:17:40,389 INFO [RS:3;jenkins-hbase20:34301] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 05:17:40,389 INFO [RS:3;jenkins-hbase20:34301] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 05:17:40,389 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:40,392 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 05:17:40,394 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,394 DEBUG [RS:3;jenkins-hbase20:34301] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 05:17:40,395 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:40,395 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:40,395 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 05:17:40,404 INFO [RS:3;jenkins-hbase20:34301] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 05:17:40,404 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34301,1689139060229-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 05:17:40,415 INFO [RS:3;jenkins-hbase20:34301] regionserver.Replication(203): jenkins-hbase20.apache.org,34301,1689139060229 started 2023-07-12 05:17:40,415 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,34301,1689139060229, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:34301, sessionid=0x1007f9d12c0000b 2023-07-12 05:17:40,415 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 05:17:40,415 DEBUG [RS:3;jenkins-hbase20:34301] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,415 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34301,1689139060229' 2023-07-12 05:17:40,415 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 05:17:40,415 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 05:17:40,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34301,1689139060229' 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 05:17:40,416 DEBUG [RS:3;jenkins-hbase20:34301] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 05:17:40,416 INFO [RS:3;jenkins-hbase20:34301] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 05:17:40,416 INFO [RS:3;jenkins-hbase20:34301] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 05:17:40,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:40,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:40,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:40,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:40,422 DEBUG [hconnection-0x6698dcad-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 05:17:40,423 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40426, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 05:17:40,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:40,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:40,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:51304 deadline: 1689140260433, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
2023-07-12 05:17:40,434 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:40,435 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:40,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,437 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:40,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:40,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:40,482 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=551 (was 505) Potentially hanging thread: RS:3;jenkins-hbase20:34301 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1684910180-2235 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:38110 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1684910180-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 37977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63349@0x51dcc376-SendThread(127.0.0.1:63349) java.lang.Thread.sleep(Native Method) 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp590402102-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1119340192@qtp-232457353-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@7b82864b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp73045089-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2007051265_17 at /127.0.0.1:38776 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp61291414-2180 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x6d1a89af sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:36357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase20:43053-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x25f1539f-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp73045089-2278 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63349@0x51dcc376-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase20:46251 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x51aa3d38-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x59f81d7e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1730102302_17 at /127.0.0.1:45640 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:38168 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43053 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2091727337_17 at /127.0.0.1:38862 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6698dcad-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad-prefix:jenkins-hbase20.apache.org,43053,1689139058705.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp590402102-2205 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1684910180-2236-acceptor-0@6f34df82-ServerConnector@693c409e{HTTP/1.1, (http/1.1)}{0.0.0.0:41211} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data3/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x1f1f6502 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x6d1a89af-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad-prefix:jenkins-hbase20.apache.org,38957,1689139058554 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:36357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1730102302_17 at /127.0.0.1:38146 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase20:36541 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5bcfe975 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp590402102-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x77eaef25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 2004032432@qtp-751815519-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1826676474-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6698dcad-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@75797253 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data6/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp590402102-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp61291414-2179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:36357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-6bda9372-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x51aa3d38-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63349@0x51dcc376 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x25f1539f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase20:36541-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x0ea2ce13-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1684910180-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826676474-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp399544612-2538 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:36357 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 41411 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44483,1689139052348 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Session-HouseKeeper-171e026a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2007051265_17 at /127.0.0.1:38806 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2007051265_17 at /127.0.0.1:45614 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@64508cd8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp399544612-2540 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:36541Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 0 on default port 36447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad-prefix:jenkins-hbase20.apache.org,43053,1689139058705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp61291414-2177 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:38878 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:38957Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: 
BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:38957 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:34301-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1512446910) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x59f81d7e-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp73045089-2277 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1865781919@qtp-751815519-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40495 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139059055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 42427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase20:34301Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:41411 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:36357 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp590402102-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46251 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 42427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/37977.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:41411 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2981876d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1826676474-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp399544612-2545 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener 
at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:41411 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@14325e7f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,46251,1689139058273 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp399544612-2539-acceptor-0@310d7425-ServerConnector@6aa69225{HTTP/1.1, (http/1.1)}{0.0.0.0:42333} sun.nio.ch.ServerSocketChannelImpl.accept0(Native 
Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:55884@0x77eaef25-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/42409-SendThread(127.0.0.1:63349) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6cade318 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@775edf5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp73045089-2280-acceptor-0@20a3c930-ServerConnector@a9d7a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:39815} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2091727337_17 at /127.0.0.1:45656 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1840982923@qtp-1175080442-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x1f1f6502-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1684910180-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5b6a266d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826676474-2265 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7f7eaa4a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data1/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp399544612-2541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 877128245@qtp-1716392617-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp399544612-2543 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad-prefix:jenkins-hbase20.apache.org,36541,1689139058420 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:36357 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp73045089-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-22422a07-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2091727337_17 at /127.0.0.1:38152 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:43053Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:36357 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially 
hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139059055 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:45660 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.util.JvmPauseMonitor$Monitor@d13391c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp399544612-2542 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x59f81d7e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x25f1539f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 36447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:38957-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/37977.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp61291414-2174 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6ef9e4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server idle connection scanner for port 37977 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp61291414-2181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5c0f5d69[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1684910180-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp73045089-2276 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp73045089-2279 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1174515783.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data5/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:55884 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x1f1f6502-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp590402102-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp61291414-2178 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp73045089-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6747fdb7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 358100573@qtp-1175080442-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42177 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp590402102-2206-acceptor-0@453f38a3-ServerConnector@470cc644{HTTP/1.1, (http/1.1)}{0.0.0.0:46539} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1684910180-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:55884): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x0ea2ce13-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x6d1a89af-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:36357 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 1 on default port 42427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase20:43053 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp590402102-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x77eaef25-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1730102302_17 at /127.0.0.1:38834 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 41411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData-prefix:jenkins-hbase20.apache.org,46251,1689139058273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data2/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:38848 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2005b37e sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data4/current/BP-1936111480-148.251.75.209-1689139057631 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42427 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1826676474-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Listener at localhost.localdomain/37977-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1684910180-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x0ea2ce13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1276386159@qtp-232457353-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46767 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:45654 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:41411 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@fba33c2[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1337998103) connection to localhost.localdomain/127.0.0.1:36357 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1826676474-2266-acceptor-0@6fb7cf3c-ServerConnector@46715e66{HTTP/1.1, (http/1.1)}{0.0.0.0:42167} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
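Each "Potentially hanging thread" entry in this dump pairs a live thread's name with its current stack. The same shape of report can be produced with plain JDK APIs; the following is a minimal, hypothetical sketch (it assumes nothing about, and is not, the HBase test harness's own leak checker) that walks Thread.getAllStackTraces() and prints one entry per live thread:

    import java.util.Map;

    // Hypothetical illustration only: dump every live thread and its stack frames,
    // mirroring the shape of the "Potentially hanging thread" entries in this log.
    public class ThreadReport {
        public static void main(String[] args) {
            Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
                System.out.println("Potentially hanging thread: " + entry.getKey().getName());
                for (StackTraceElement frame : entry.getValue()) {
                    // Prints e.g. java.lang.Thread.run(Thread.java:750)
                    System.out.println("    " + frame);
                }
            }
        }
    }

Most of the stacks shown here are parked in blocking queue take()/poll() calls, timer waits, or selector epollWait loops, i.e. the usual idle state of pooled worker threads rather than stacks actively making progress.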
Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@333c0bae java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37977 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 2135413989@qtp-1716392617-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37615 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) 
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x22b9b4c0-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1836028515_17 at /127.0.0.1:38166 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server idle connection scanner for port 36447 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/42409-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 42427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1826676474-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp61291414-2175-acceptor-0@73529959-ServerConnector@11bf68c0{HTTP/1.1, (http/1.1)}{0.0.0.0:46253} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5c9f433e java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp399544612-2544 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977-SendThread(127.0.0.1:55884) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2007051265_17 at /127.0.0.1:38126 [Receiving block BP-1936111480-148.251.75.209-1689139057631:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826676474-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp61291414-2176 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2377f24b-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55884@0x51aa3d38 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1999710476.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37977.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1936111480-148.251.75.209-1689139057631:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 41411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41411 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=820 (was 781) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 525) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 170), AvailableMemoryMB=2601 (was 3290) 2023-07-12 05:17:40,484 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=551 is superior to 500 2023-07-12 05:17:40,499 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=550, OpenFileDescriptor=820, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=170, AvailableMemoryMB=2600 2023-07-12 05:17:40,499 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=550 is superior to 500 2023-07-12 05:17:40,499 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 05:17:40,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:40,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
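[Annotation] The records just above and below show the per-method reset that TestRSGroupsBase runs around each test: list the rsgroups, move an (empty) table set and an (empty) server set back to the default group, then remove and re-create the "master" group. A minimal sketch of the same calls through the rsgroup admin client from this branch (the helper name resetRSGroups and the Connection parameter conn are illustrative, not taken from the test source; signatures assumed from the 2.4 hbase-rsgroup module):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void resetRSGroups(Connection conn) throws IOException {
      RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
      groupAdmin.listRSGroups();                                             // ListRSGroupInfos
      groupAdmin.moveTables(Collections.<TableName>emptySet(), "default");   // empty set -> "Ignoring" in the log
      groupAdmin.moveServers(Collections.<Address>emptySet(), "default");    // MoveServers with an empty set
      groupAdmin.removeRSGroup("master");                                    // RemoveRSGroup
      groupAdmin.addRSGroup("master");                                       // AddRSGroup
    }

Each call surfaces on the master as one of the RSGroupAdminService requests logged below (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup).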
2023-07-12 05:17:40,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:40,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:40,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:40,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:40,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:40,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:40,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:40,512 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:40,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:40,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:40,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:40,518 INFO [RS:3;jenkins-hbase20:34301] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34301%2C1689139060229, suffix=, logDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,34301,1689139060229, archiveDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs, maxLogs=32 2023-07-12 05:17:40,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:40,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:40,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,536 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake 
in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK] 2023-07-12 05:17:40,536 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK] 2023-07-12 05:17:40,537 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK] 2023-07-12 05:17:40,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:40,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:40,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:51304 deadline: 1689140260537, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:40,538 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:40,538 INFO [RS:3;jenkins-hbase20:34301] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/WALs/jenkins-hbase20.apache.org,34301,1689139060229/jenkins-hbase20.apache.org%2C34301%2C1689139060229.1689139060519 2023-07-12 05:17:40,540 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:40,540 DEBUG [RS:3;jenkins-hbase20:34301] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40341,DS-7f8dc505-c102-4b3a-a66a-76b7ec992c42,DISK], DatanodeInfoWithStorage[127.0.0.1:39137,DS-0b7ae179-f274-45be-ba4f-2fcfbff52803,DISK], DatanodeInfoWithStorage[127.0.0.1:34113,DS-ea86c4e2-5b50-4ea2-b500-898ea86e0006,DISK]] 2023-07-12 05:17:40,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:40,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:40,541 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:40,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:40,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:40,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:40,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 05:17:40,546 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:40,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 05:17:40,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 05:17:40,548 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-12 05:17:40,548 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:40,549 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:40,550 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 05:17:40,552 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,553 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 empty. 2023-07-12 05:17:40,554 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,554 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 05:17:40,568 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 05:17:40,569 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 40b44bfefb26fe70f6d09fbcdb6fb490, NAME => 't1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp 2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 40b44bfefb26fe70f6d09fbcdb6fb490, disabling compactions & flushes 2023-07-12 05:17:40,577 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. after waiting 0 ms 2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:40,577 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 
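[Annotation] The CreateTableProcedure records above (pid=12) write the FS layout for table t1 and initialize its single region using the descriptor logged at the create request: one column family cf1, one version, 64 KB blocks, region replication 1. On the client side the equivalent request could be issued roughly as below, a sketch against the standard HBase 2.x Admin API; the helper name createT1 and the open Connection conn are assumptions for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    static void createT1(Connection conn) throws IOException {
      TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .setRegionReplication(1)                                    // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
              .setMaxVersions(1)                                      // VERSIONS => '1'
              .setBlocksize(65536)                                    // BLOCKSIZE => '65536'
              .build())
          .build();
      try (Admin admin = conn.getAdmin()) {
        admin.createTable(t1);   // drives a CreateTableProcedure on the master, like pid=12 here
      }
    }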
2023-07-12 05:17:40,577 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 40b44bfefb26fe70f6d09fbcdb6fb490: 2023-07-12 05:17:40,580 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 05:17:40,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139060580"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139060580"}]},"ts":"1689139060580"} 2023-07-12 05:17:40,582 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 05:17:40,583 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 05:17:40,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139060583"}]},"ts":"1689139060583"} 2023-07-12 05:17:40,584 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 05:17:40,586 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 05:17:40,587 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, ASSIGN}] 2023-07-12 05:17:40,587 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, ASSIGN 2023-07-12 05:17:40,588 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36541,1689139058420; forceNewPlan=false, retain=false 2023-07-12 05:17:40,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 05:17:40,738 INFO [jenkins-hbase20:46251] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 05:17:40,740 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=40b44bfefb26fe70f6d09fbcdb6fb490, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,740 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139060740"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139060740"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139060740"}]},"ts":"1689139060740"} 2023-07-12 05:17:40,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 40b44bfefb26fe70f6d09fbcdb6fb490, server=jenkins-hbase20.apache.org,36541,1689139058420}] 2023-07-12 05:17:40,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 05:17:40,907 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,908 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 05:17:40,910 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 05:17:40,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:40,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 40b44bfefb26fe70f6d09fbcdb6fb490, NAME => 't1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.', STARTKEY => '', ENDKEY => ''} 2023-07-12 05:17:40,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 05:17:40,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,916 INFO [StoreOpener-40b44bfefb26fe70f6d09fbcdb6fb490-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,917 DEBUG [StoreOpener-40b44bfefb26fe70f6d09fbcdb6fb490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/cf1 2023-07-12 05:17:40,917 DEBUG 
[StoreOpener-40b44bfefb26fe70f6d09fbcdb6fb490-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/cf1 2023-07-12 05:17:40,918 INFO [StoreOpener-40b44bfefb26fe70f6d09fbcdb6fb490-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 40b44bfefb26fe70f6d09fbcdb6fb490 columnFamilyName cf1 2023-07-12 05:17:40,918 INFO [StoreOpener-40b44bfefb26fe70f6d09fbcdb6fb490-1] regionserver.HStore(310): Store=40b44bfefb26fe70f6d09fbcdb6fb490/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 05:17:40,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:40,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 05:17:40,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 40b44bfefb26fe70f6d09fbcdb6fb490; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10177692640, jitterRate=-0.05212850868701935}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 05:17:40,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 40b44bfefb26fe70f6d09fbcdb6fb490: 2023-07-12 05:17:40,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490., pid=14, masterSystemTime=1689139060907 2023-07-12 05:17:40,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:40,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 
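[Annotation] At this point the single region of t1 (encoded name 40b44bfefb26fe70f6d09fbcdb6fb490) has been opened on the region server at port 36541. Purely as an illustration, not part of the test, a client could observe the resulting assignment through the RegionLocator API, roughly like this (printT1Locations and conn are assumed names):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    static void printT1Locations(Connection conn) throws IOException {
      try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
        List<HRegionLocation> locations = locator.getAllRegionLocations();
        for (HRegionLocation loc : locations) {
          // e.g. region 40b44bfefb26fe70f6d09fbcdb6fb490 hosted on jenkins-hbase20...,36541,...
          System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
        }
      }
    }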
2023-07-12 05:17:40,931 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=40b44bfefb26fe70f6d09fbcdb6fb490, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:40,932 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139060931"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689139060931"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689139060931"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689139060931"}]},"ts":"1689139060931"} 2023-07-12 05:17:40,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 05:17:40,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 40b44bfefb26fe70f6d09fbcdb6fb490, server=jenkins-hbase20.apache.org,36541,1689139058420 in 192 msec 2023-07-12 05:17:40,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 05:17:40,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, ASSIGN in 347 msec 2023-07-12 05:17:40,936 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 05:17:40,936 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139060936"}]},"ts":"1689139060936"} 2023-07-12 05:17:40,937 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 05:17:40,939 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 05:17:40,940 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 395 msec 2023-07-12 05:17:41,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 05:17:41,152 INFO [Listener at localhost.localdomain/37977] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 05:17:41,152 DEBUG [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 05:17:41,153 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,155 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-12 05:17:41,155 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,155 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
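[Annotation] The next records show the actual subject of testNotMoveTableToNullRSGroupWhenCreatingExistingTable: a second create of t1, which the master rejects with TableExistsException and rolls back (pid=15 below). A hedged sketch of the client-side expectation, with assumed helper name expectTableExists and a TableDescriptor built as in the earlier sketch:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    static void expectTableExists(Admin admin, TableDescriptor t1) throws IOException {
      try {
        admin.createTable(t1);   // t1 already exists
        throw new AssertionError("second create of t1 should have failed");
      } catch (TableExistsException expected) {
        // expected: the master rolls back the CreateTableProcedure (pid=15 in the records below)
      }
    }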
2023-07-12 05:17:41,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 05:17:41,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 05:17:41,159 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 05:17:41,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 05:17:41,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 148.251.75.209:51304 deadline: 1689139121157, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 05:17:41,162 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,163 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-12 05:17:41,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,264 INFO [Listener at localhost.localdomain/37977] client.HBaseAdmin$15(890): Started disable of t1 2023-07-12 05:17:41,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable t1 2023-07-12 05:17:41,267 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 05:17:41,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 05:17:41,271 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139061271"}]},"ts":"1689139061271"} 2023-07-12 05:17:41,274 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 05:17:41,275 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 05:17:41,276 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, UNASSIGN}] 2023-07-12 05:17:41,277 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, UNASSIGN 2023-07-12 05:17:41,277 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=40b44bfefb26fe70f6d09fbcdb6fb490, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:41,277 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139061277"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689139061277"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689139061277"}]},"ts":"1689139061277"} 2023-07-12 05:17:41,279 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 40b44bfefb26fe70f6d09fbcdb6fb490, server=jenkins-hbase20.apache.org,36541,1689139058420}] 2023-07-12 05:17:41,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 05:17:41,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:41,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 40b44bfefb26fe70f6d09fbcdb6fb490, disabling compactions & flushes 2023-07-12 05:17:41,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:41,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:41,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. after waiting 0 ms 2023-07-12 05:17:41,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 
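The rolled-back pid=15 above is the master refusing to create 't1' a second time, which is exactly what testNotMoveTableToNullRSGroupWhenCreatingExistingTable provokes. A hypothetical guard (not part of the test) showing the client-visible behaviour:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateIfAbsentSketch {
      // Hypothetical helper: create 't1' only if it is absent.
      static void createIfAbsent(Admin admin) throws IOException {
        TableName t1 = TableName.valueOf("t1");
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(t1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1")).build());
        } catch (TableExistsException e) {
          // Surfaced to the caller after the CreateTableProcedure rolls back, as logged above.
        }
      }
    }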
2023-07-12 05:17:41,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 05:17:41,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490. 2023-07-12 05:17:41,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 40b44bfefb26fe70f6d09fbcdb6fb490: 2023-07-12 05:17:41,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:41,437 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=40b44bfefb26fe70f6d09fbcdb6fb490, regionState=CLOSED 2023-07-12 05:17:41,437 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689139061437"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689139061437"}]},"ts":"1689139061437"} 2023-07-12 05:17:41,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 05:17:41,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 40b44bfefb26fe70f6d09fbcdb6fb490, server=jenkins-hbase20.apache.org,36541,1689139058420 in 164 msec 2023-07-12 05:17:41,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 05:17:41,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=40b44bfefb26fe70f6d09fbcdb6fb490, UNASSIGN in 168 msec 2023-07-12 05:17:41,446 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689139061446"}]},"ts":"1689139061446"} 2023-07-12 05:17:41,447 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 05:17:41,448 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 05:17:41,450 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 184 msec 2023-07-12 05:17:41,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 05:17:41,576 INFO [Listener at localhost.localdomain/37977] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 05:17:41,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete t1 2023-07-12 05:17:41,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 05:17:41,582 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 
05:17:41,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 05:17:41,583 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 05:17:41,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,586 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:41,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 05:17:41,589 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/cf1, FileablePath, hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/recovered.edits] 2023-07-12 05:17:41,593 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/recovered.edits/4.seqid to hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/archive/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490/recovered.edits/4.seqid 2023-07-12 05:17:41,594 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/.tmp/data/default/t1/40b44bfefb26fe70f6d09fbcdb6fb490 2023-07-12 05:17:41,594 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 05:17:41,596 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 05:17:41,597 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 05:17:41,599 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 05:17:41,600 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 05:17:41,600 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
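The DisableTableProcedure (pid=16) and DeleteTableProcedure (pid=19) above are driven by a plain disable-then-delete from the client. A minimal sketch with the standard Admin API:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropT1Sketch {
      // Assumes an Admin obtained from an open Connection.
      static void dropT1(Admin admin) throws IOException {
        TableName t1 = TableName.valueOf("t1");
        if (admin.tableExists(t1)) {
          admin.disableTable(t1); // DisableTableProcedure: unassign the region, mark DISABLED
          admin.deleteTable(t1);  // DeleteTableProcedure: archive the region directory and
                                  // remove the region and table state rows from hbase:meta
        }
      }
    }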
2023-07-12 05:17:41,600 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689139061600"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:41,601 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 05:17:41,601 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 40b44bfefb26fe70f6d09fbcdb6fb490, NAME => 't1,,1689139060543.40b44bfefb26fe70f6d09fbcdb6fb490.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 05:17:41,601 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-12 05:17:41,602 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689139061602"}]},"ts":"9223372036854775807"} 2023-07-12 05:17:41,603 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 05:17:41,605 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 05:17:41,606 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 27 msec 2023-07-12 05:17:41,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 05:17:41,689 INFO [Listener at localhost.localdomain/37977] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 05:17:41,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
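The ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup requests around this point are the TestRSGroupsBase teardown resetting group state between test methods; it ends by trying to move the master's own address into the 'master' group, which is rejected in the lines that follow because that address is not a live region server. A rough sketch of the corresponding client calls, assuming the RSGroupAdminClient API of this hbase-rsgroup module:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupResetSketch {
      static void resetGroups(Connection conn) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        groupAdmin.listRSGroups();                                // ListRSGroupInfos
        groupAdmin.moveTables(Collections.emptySet(), "default"); // "moveTables() passed an empty set"
        groupAdmin.moveServers(Collections.emptySet(), "default");
        groupAdmin.removeRSGroup("master");
        groupAdmin.addRSGroup("master");
        try {
          groupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 46251)),
              "master");
        } catch (IOException e) {
          // ConstraintException: the master's address is not a known region server, so the
          // move is rejected; the test logs it as "Got this on setup, FYI" and carries on.
        }
      }
    }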
2023-07-12 05:17:41,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,701 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:51304 deadline: 1689140261711, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,711 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:41,714 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,716 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,733 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=564 (was 550) - Thread LEAK? -, OpenFileDescriptor=826 (was 820) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=170 (was 170), AvailableMemoryMB=2596 (was 2600) 2023-07-12 05:17:41,733 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-12 05:17:41,748 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=564, OpenFileDescriptor=826, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=170, AvailableMemoryMB=2596 2023-07-12 05:17:41,748 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-12 05:17:41,748 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 05:17:41,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 05:17:41,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,760 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-12 05:17:41,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140261768, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,768 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:41,770 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,770 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 05:17:41,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:41,773 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 05:17:41,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 05:17:41,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 05:17:41,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
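testNonExistentTableMove, which runs in the lines above, looks up the group of a table that was never created ("GrouptestNonExistentTableMove") and then attempts to move it to 'default'. A sketch of the client side, again assuming the RSGroupAdminClient API of this module; how the master answers the move of an unknown table is what the test asserts on:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class NonExistentTableMoveSketch {
      static void probe(Connection conn) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        TableName missing = TableName.valueOf("GrouptestNonExistentTableMove");
        // GetRSGroupInfoOfTable above; the test expects no group for a table that does not exist.
        RSGroupInfo info = groupAdmin.getRSGroupInfoOfTable(missing);
        // "Moving table GrouptestNonExistentTableMove to default" in the test log.
        groupAdmin.moveTables(Collections.singleton(missing), "default");
      }
    }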
2023-07-12 05:17:41,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,789 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140261798, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,799 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:41,801 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,802 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,817 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=566 (was 564) - Thread LEAK? 
-, OpenFileDescriptor=826 (was 826), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=170 (was 170), AvailableMemoryMB=2596 (was 2596) 2023-07-12 05:17:41,817 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=566 is superior to 500 2023-07-12 05:17:41,834 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=566, OpenFileDescriptor=826, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=170, AvailableMemoryMB=2595 2023-07-12 05:17:41,834 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=566 is superior to 500 2023-07-12 05:17:41,834 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 05:17:41,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 05:17:41,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,847 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140261866, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,867 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:41,869 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,870 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
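The ConstraintException repeated above is produced by the test harness itself: TestRSGroupsBase.tearDownAfterMethod (and setUpBeforeMethod via it) tries to move the master's own address, jenkins-hbase20.apache.org:46251, back into the "master" rsgroup, and RSGroupAdminServer.moveServers rejects any address that is not a live region server; the test only logs it as "Got this on setup, FYI" and continues. The following is a minimal, hypothetical sketch of the client-side call that triggers this error, assuming an already-configured cluster connection and using a placeholder host/port rather than anything from this run:

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterIntoGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master's RPC endpoint, not a region server (host and port are placeholders).
          Address masterAddr = Address.fromParts("master-host.example.org", 46251);
          try {
            // RSGroupAdminServer.moveServers only accepts known, online region servers,
            // so moving the master's address is refused.
            rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
          } catch (IOException e) {
            // Surfaces as ConstraintException:
            // "Server ... is either offline or it does not exist."
          }
        }
      }
    }

This matches the client stack in the traces above (RSGroupAdminClient.moveServers -> VerifyingRSGroupAdminClient.moveServers -> TestRSGroupsBase.tearDownAfterMethod); the exception is expected noise during per-method cleanup, not a test failure.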
2023-07-12 05:17:41,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,885 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140261915, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,915 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:41,916 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,918 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,936 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=567 (was 566) - Thread LEAK? 
-, OpenFileDescriptor=826 (was 826), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=170 (was 170), AvailableMemoryMB=2595 (was 2595) 2023-07-12 05:17:41,936 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-12 05:17:41,955 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=567, OpenFileDescriptor=826, MaxFileDescriptor=60000, SystemLoadAverage=528, ProcessCount=170, AvailableMemoryMB=2595 2023-07-12 05:17:41,955 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-12 05:17:41,955 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 05:17:41,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:41,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 05:17:41,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:41,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:41,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:41,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:41,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:41,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:41,967 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:41,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:41,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:41,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:41,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:41,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140261975, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:41,975 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 05:17:41,977 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:41,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,978 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:41,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:41,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:41,979 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-12 05:17:41,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_foo 2023-07-12 05:17:41,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 05:17:41,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:41,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:41,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 05:17:41,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:41,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:41,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:41,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 05:17:41,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-12 05:17:41,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 05:17:42,002 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:42,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 16 msec 2023-07-12 05:17:42,095 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:42,095 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 05:17:42,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 05:17:42,095 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:42,095 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 05:17:42,096 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 05:17:42,096 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 05:17:42,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-12 05:17:42,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:42,098 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 148.251.75.209:51304 deadline: 1689140262096, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 05:17:42,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$16(3053): Client=jenkins//148.251.75.209 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 05:17:42,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 05:17:42,119 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 05:17:42,120 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-12 05:17:42,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 05:17:42,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_anotherGroup 2023-07-12 05:17:42,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 05:17:42,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:42,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 05:17:42,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:42,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 05:17:42,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:42,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:42,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:42,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete Group_foo 2023-07-12 05:17:42,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] procedure2.ProcedureExecutor(1029): Stored 
pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,239 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,241 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 05:17:42,242 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,244 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 05:17:42,244 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 05:17:42,244 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,246 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 05:17:42,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-12 05:17:42,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 05:17:42,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-12 05:17:42,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 05:17:42,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:42,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:42,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 05:17:42,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:42,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:42,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:42,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:42,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 148.251.75.209:51304 deadline: 1689139122361, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-12 05:17:42,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:42,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:42,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:42,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
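The testNamespaceConstraint entries above trace a specific coupling: the namespace Group_foo is created with the hbase.rsgroup.name property pointing at the region server group Group_foo, the subsequent RemoveRSGroup call is rejected with ConstraintException ("RSGroup Group_foo is referenced by namespace: Group_foo"), and only after the DeleteNamespaceProcedure finishes does removing the group succeed. A minimal client-side sketch of that flow follows; it is an illustration only, assuming the RSGroupAdminClient wrapper named in the stack traces above (its Connection-taking constructor is an assumption) and the standard Admin/NamespaceDescriptor API for the namespace side.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceConstraintSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Constructor signature assumed from the RSGroupAdminClient frames in the traces above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      rsGroupAdmin.addRSGroup("Group_foo");

      // Bind the namespace to the group via hbase.rsgroup.name, mirroring the
      // CreateNamespaceProcedure logged above.
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo")
          .build());

      try {
        // Rejected while the namespace still references the group.
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException expected) {
        // "RSGroup Group_foo is referenced by namespace: Group_foo"
      }

      // Dropping the namespace releases the reference; removal then succeeds,
      // as in the RemoveRSGroup entry that follows the DeleteNamespaceProcedure above.
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");
    }
  }
}
```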
2023-07-12 05:17:42,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:42,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:42,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:42,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_anotherGroup 2023-07-12 05:17:42,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:42,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:42,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 05:17:42,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:42,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 05:17:42,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
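The cleanup entries above (and the recurring ConstraintException "Server ... is either offline or it does not exist" in this teardown) come from moving sets of server addresses between groups; the address being moved must belong to a live, registered region server, which the master's own RPC port here is not. A hedged sketch of such a call, again assuming the RSGroupAdminClient wrapper from the stack traces and the Address helper (both signatures are assumptions, not confirmed by this log):

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  // Moves a single region server into the named group. If the address does not
  // correspond to an online region server, the master rejects the call with
  // ConstraintException ("... is either offline or it does not exist"),
  // which is the failure repeated in the teardown entries above.
  static void moveOne(Connection conn, String host, int port, String group) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // constructor assumed
    rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), group);
  }
}
```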
2023-07-12 05:17:42,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 05:17:42,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 05:17:42,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 05:17:42,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 05:17:42,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:42,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 05:17:42,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 05:17:42,388 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 05:17:42,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 05:17:42,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 05:17:42,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 05:17:42,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 05:17:42,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 05:17:42,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:42,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:42,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:46251] to rsgroup master 2023-07-12 05:17:42,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 05:17:42,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:51304 deadline: 1689140262398, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 2023-07-12 05:17:42,399 WARN [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:46251 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 05:17:42,401 INFO [Listener at localhost.localdomain/37977] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 05:17:42,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 05:17:42,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 05:17:42,402 INFO [Listener at localhost.localdomain/37977] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34301, jenkins-hbase20.apache.org:36541, jenkins-hbase20.apache.org:38957, jenkins-hbase20.apache.org:43053], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 05:17:42,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 05:17:42,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46251] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 05:17:42,420 INFO [Listener at localhost.localdomain/37977] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=567 (was 567), OpenFileDescriptor=826 (was 826), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=528 (was 528), ProcessCount=170 (was 170), AvailableMemoryMB=2593 (was 2595) 2023-07-12 05:17:42,420 WARN [Listener at localhost.localdomain/37977] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-12 05:17:42,420 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 05:17:42,420 INFO [Listener at localhost.localdomain/37977] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 05:17:42,420 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f1f6502 to 127.0.0.1:55884 2023-07-12 05:17:42,420 DEBUG [Listener at localhost.localdomain/37977] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,420 
DEBUG [Listener at localhost.localdomain/37977] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 05:17:42,420 DEBUG [Listener at localhost.localdomain/37977] util.JVMClusterUtil(257): Found active master hash=2018509792, stopped=false 2023-07-12 05:17:42,420 DEBUG [Listener at localhost.localdomain/37977] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 05:17:42,421 INFO [Listener at localhost.localdomain/37977] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:42,421 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 05:17:42,422 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:42,422 INFO [Listener at localhost.localdomain/37977] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 05:17:42,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:42,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:42,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:42,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:42,422 DEBUG [Listener at localhost.localdomain/37977] zookeeper.ReadOnlyZKClient(361): Close zookeeper 
connection 0x0ea2ce13 to 127.0.0.1:55884 2023-07-12 05:17:42,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 05:17:42,422 DEBUG [Listener at localhost.localdomain/37977] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,422 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,36541,1689139058420' ***** 2023-07-12 05:17:42,423 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:42,423 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38957,1689139058554' ***** 2023-07-12 05:17:42,423 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:42,423 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:42,423 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,43053,1689139058705' ***** 2023-07-12 05:17:42,424 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:42,423 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:42,424 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:42,424 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,34301,1689139060229' ***** 2023-07-12 05:17:42,427 INFO [Listener at localhost.localdomain/37977] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 05:17:42,427 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:42,432 INFO [RS:0;jenkins-hbase20:36541] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5cb25272{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:42,432 INFO [RS:3;jenkins-hbase20:34301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@68b4c5d5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:42,432 INFO [RS:1;jenkins-hbase20:38957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4faab4a8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:42,433 INFO [RS:2;jenkins-hbase20:43053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@165bed6e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 05:17:42,433 INFO [RS:3;jenkins-hbase20:34301] server.AbstractConnector(383): Stopped ServerConnector@6aa69225{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 
2023-07-12 05:17:42,433 INFO [RS:1;jenkins-hbase20:38957] server.AbstractConnector(383): Stopped ServerConnector@693c409e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:42,433 INFO [RS:0;jenkins-hbase20:36541] server.AbstractConnector(383): Stopped ServerConnector@470cc644{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:42,433 INFO [RS:1;jenkins-hbase20:38957] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:42,433 INFO [RS:3;jenkins-hbase20:34301] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:42,433 INFO [RS:2;jenkins-hbase20:43053] server.AbstractConnector(383): Stopped ServerConnector@46715e66{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:42,433 INFO [RS:0;jenkins-hbase20:36541] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:42,435 INFO [RS:3;jenkins-hbase20:34301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e0b3138{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:42,436 INFO [RS:0;jenkins-hbase20:36541] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a833aa2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:42,437 INFO [RS:3;jenkins-hbase20:34301] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@18f07b38{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:42,437 INFO [RS:0;jenkins-hbase20:36541] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5506ba92{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:42,435 INFO [RS:2;jenkins-hbase20:43053] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:42,434 INFO [RS:1;jenkins-hbase20:38957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6b2bbe25{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:42,439 INFO [RS:0;jenkins-hbase20:36541] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:42,439 INFO [RS:3;jenkins-hbase20:34301] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:42,439 INFO [RS:1;jenkins-hbase20:38957] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3acab148{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:42,439 INFO [RS:2;jenkins-hbase20:43053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@27ea5a91{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:42,439 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:42,439 INFO [RS:3;jenkins-hbase20:34301] 
flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:42,439 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:42,439 INFO [RS:0;jenkins-hbase20:36541] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:42,440 INFO [RS:1;jenkins-hbase20:38957] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:42,440 INFO [RS:3;jenkins-hbase20:34301] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:42,441 INFO [RS:1;jenkins-hbase20:38957] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 05:17:42,440 INFO [RS:2;jenkins-hbase20:43053] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c7d608e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:42,441 INFO [RS:1;jenkins-hbase20:38957] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:42,441 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:42,441 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:42,441 DEBUG [RS:3;jenkins-hbase20:34301] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x77eaef25 to 127.0.0.1:55884 2023-07-12 05:17:42,440 INFO [RS:0;jenkins-hbase20:36541] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:42,441 DEBUG [RS:3;jenkins-hbase20:34301] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,441 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,34301,1689139060229; all regions closed. 2023-07-12 05:17:42,441 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:42,441 INFO [RS:2;jenkins-hbase20:43053] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 05:17:42,441 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:42,441 DEBUG [RS:1;jenkins-hbase20:38957] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59f81d7e to 127.0.0.1:55884 2023-07-12 05:17:42,441 DEBUG [RS:0;jenkins-hbase20:36541] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x25f1539f to 127.0.0.1:55884 2023-07-12 05:17:42,441 DEBUG [RS:1;jenkins-hbase20:38957] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,442 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38957,1689139058554; all regions closed. 2023-07-12 05:17:42,441 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 05:17:42,441 INFO [RS:2;jenkins-hbase20:43053] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 05:17:42,442 INFO [RS:2;jenkins-hbase20:43053] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 05:17:42,442 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(3305): Received CLOSE for f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:42,441 DEBUG [RS:0;jenkins-hbase20:36541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,442 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36541,1689139058420; all regions closed. 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(3305): Received CLOSE for ab3ab2de8bb806ac68ad6a5825a00149 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:42,443 DEBUG [RS:2;jenkins-hbase20:43053] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6d1a89af to 127.0.0.1:55884 2023-07-12 05:17:42,443 DEBUG [RS:2;jenkins-hbase20:43053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:42,443 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 05:17:42,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f545f95dd04fc91710fc223400fcc688, disabling compactions & flushes 2023-07-12 05:17:42,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:42,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:42,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. after waiting 0 ms 2023-07-12 05:17:42,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 
2023-07-12 05:17:42,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing f545f95dd04fc91710fc223400fcc688 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 05:17:42,450 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-12 05:17:42,450 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, f545f95dd04fc91710fc223400fcc688=hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688., ab3ab2de8bb806ac68ad6a5825a00149=hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149.} 2023-07-12 05:17:42,450 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 05:17:42,450 DEBUG [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1504): Waiting on 1588230740, ab3ab2de8bb806ac68ad6a5825a00149, f545f95dd04fc91710fc223400fcc688 2023-07-12 05:17:42,450 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 05:17:42,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 05:17:42,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 05:17:42,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 05:17:42,452 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-12 05:17:42,459 DEBUG [RS:3;jenkins-hbase20:34301] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs 2023-07-12 05:17:42,459 INFO [RS:3;jenkins-hbase20:34301] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C34301%2C1689139060229:(num 1689139060519) 2023-07-12 05:17:42,459 DEBUG [RS:3;jenkins-hbase20:34301] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,459 INFO [RS:3;jenkins-hbase20:34301] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,459 DEBUG [RS:1;jenkins-hbase20:38957] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs 2023-07-12 05:17:42,459 INFO [RS:1;jenkins-hbase20:38957] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38957%2C1689139058554:(num 1689139059319) 2023-07-12 05:17:42,459 DEBUG [RS:1;jenkins-hbase20:38957] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,459 INFO [RS:3;jenkins-hbase20:34301] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:42,459 INFO [RS:1;jenkins-hbase20:38957] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,459 INFO [RS:1;jenkins-hbase20:38957] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, 
period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:42,460 INFO [RS:3;jenkins-hbase20:34301] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:42,460 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:42,460 INFO [RS:3;jenkins-hbase20:34301] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:42,460 INFO [RS:3;jenkins-hbase20:34301] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:42,460 INFO [RS:1;jenkins-hbase20:38957] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:42,460 INFO [RS:1;jenkins-hbase20:38957] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:42,460 INFO [RS:1;jenkins-hbase20:38957] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 05:17:42,460 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:42,461 INFO [RS:3;jenkins-hbase20:34301] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:34301 2023-07-12 05:17:42,463 INFO [RS:1;jenkins-hbase20:38957] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38957 2023-07-12 05:17:42,466 DEBUG [RS:0;jenkins-hbase20:36541] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs 2023-07-12 05:17:42,466 INFO [RS:0;jenkins-hbase20:36541] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C36541%2C1689139058420:(num 1689139059320) 2023-07-12 05:17:42,466 DEBUG [RS:0;jenkins-hbase20:36541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,467 INFO [RS:0;jenkins-hbase20:36541] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,467 INFO [RS:0;jenkins-hbase20:36541] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:42,467 INFO [RS:0;jenkins-hbase20:36541] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 05:17:42,467 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:42,467 INFO [RS:0;jenkins-hbase20:36541] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 05:17:42,467 INFO [RS:0;jenkins-hbase20:36541] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 05:17:42,468 INFO [RS:0;jenkins-hbase20:36541] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36541 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38957,1689139058554 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): 
master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,469 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34301,1689139060229 2023-07-12 05:17:42,470 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:42,470 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:42,470 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:42,470 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36541,1689139058420 2023-07-12 05:17:42,471 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36541,1689139058420] 2023-07-12 05:17:42,471 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36541,1689139058420; numProcessing=1 2023-07-12 05:17:42,475 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,478 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,490 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,494 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/info/19b39e6eb52b4a74b7695d1d5dea5d59 2023-07-12 05:17:42,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/.tmp/info/fa50903d52fa4cdeb49f10335ae85923 2023-07-12 05:17:42,499 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 19b39e6eb52b4a74b7695d1d5dea5d59 2023-07-12 05:17:42,505 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa50903d52fa4cdeb49f10335ae85923 2023-07-12 05:17:42,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/.tmp/info/fa50903d52fa4cdeb49f10335ae85923 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/info/fa50903d52fa4cdeb49f10335ae85923 2023-07-12 05:17:42,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fa50903d52fa4cdeb49f10335ae85923 2023-07-12 05:17:42,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/info/fa50903d52fa4cdeb49f10335ae85923, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 05:17:42,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for f545f95dd04fc91710fc223400fcc688 in 71ms, sequenceid=9, compaction requested=false 2023-07-12 05:17:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/namespace/f545f95dd04fc91710fc223400fcc688/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 05:17:42,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f545f95dd04fc91710fc223400fcc688: 2023-07-12 05:17:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689139059669.f545f95dd04fc91710fc223400fcc688. 2023-07-12 05:17:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ab3ab2de8bb806ac68ad6a5825a00149, disabling compactions & flushes 2023-07-12 05:17:42,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. after waiting 0 ms 2023-07-12 05:17:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 
2023-07-12 05:17:42,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing ab3ab2de8bb806ac68ad6a5825a00149 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-12 05:17:42,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/rep_barrier/2233da53c2b1459fbcd43a04c9d509b8 2023-07-12 05:17:42,547 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2233da53c2b1459fbcd43a04c9d509b8 2023-07-12 05:17:42,571 INFO [RS:1;jenkins-hbase20:38957] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38957,1689139058554; zookeeper connection closed. 2023-07-12 05:17:42,571 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,571 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:38957-0x1007f9d12c00002, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,578 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36541,1689139058420 already deleted, retry=false 2023-07-12 05:17:42,578 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36541,1689139058420 expired; onlineServers=3 2023-07-12 05:17:42,578 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,34301,1689139060229] 2023-07-12 05:17:42,578 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,34301,1689139060229; numProcessing=2 2023-07-12 05:17:42,583 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/table/c25a25e539d5487fab6503d1c6a96b29 2023-07-12 05:17:42,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/.tmp/m/0f7a0f912f124327a5322434195f4ff2 2023-07-12 05:17:42,583 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@342f3084] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@342f3084 2023-07-12 05:17:42,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f7a0f912f124327a5322434195f4ff2 2023-07-12 05:17:42,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/.tmp/m/0f7a0f912f124327a5322434195f4ff2 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/m/0f7a0f912f124327a5322434195f4ff2 2023-07-12 05:17:42,593 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c25a25e539d5487fab6503d1c6a96b29 2023-07-12 05:17:42,594 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/info/19b39e6eb52b4a74b7695d1d5dea5d59 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/info/19b39e6eb52b4a74b7695d1d5dea5d59 2023-07-12 05:17:42,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f7a0f912f124327a5322434195f4ff2 2023-07-12 05:17:42,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/m/0f7a0f912f124327a5322434195f4ff2, entries=12, sequenceid=29, filesize=5.5 K 2023-07-12 05:17:42,603 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 19b39e6eb52b4a74b7695d1d5dea5d59 2023-07-12 05:17:42,603 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/info/19b39e6eb52b4a74b7695d1d5dea5d59, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 05:17:42,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for ab3ab2de8bb806ac68ad6a5825a00149 in 67ms, sequenceid=29, compaction requested=false 2023-07-12 05:17:42,604 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/rep_barrier/2233da53c2b1459fbcd43a04c9d509b8 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/rep_barrier/2233da53c2b1459fbcd43a04c9d509b8 2023-07-12 05:17:42,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2233da53c2b1459fbcd43a04c9d509b8 2023-07-12 05:17:42,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/rep_barrier/2233da53c2b1459fbcd43a04c9d509b8, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 05:17:42,620 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/.tmp/table/c25a25e539d5487fab6503d1c6a96b29 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/table/c25a25e539d5487fab6503d1c6a96b29 2023-07-12 05:17:42,622 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,622 INFO [RS:3;jenkins-hbase20:34301] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,34301,1689139060229; zookeeper connection closed. 2023-07-12 05:17:42,622 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:34301-0x1007f9d12c0000b, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,630 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56939654] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56939654 2023-07-12 05:17:42,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/rsgroup/ab3ab2de8bb806ac68ad6a5825a00149/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 05:17:42,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:42,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 2023-07-12 05:17:42,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ab3ab2de8bb806ac68ad6a5825a00149: 2023-07-12 05:17:42,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689139059658.ab3ab2de8bb806ac68ad6a5825a00149. 
2023-07-12 05:17:42,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c25a25e539d5487fab6503d1c6a96b29 2023-07-12 05:17:42,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/table/c25a25e539d5487fab6503d1c6a96b29, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 05:17:42,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 1588230740 in 182ms, sequenceid=26, compaction requested=false 2023-07-12 05:17:42,644 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 05:17:42,645 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 05:17:42,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:42,645 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 05:17:42,645 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 05:17:42,651 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43053,1689139058705; all regions closed. 2023-07-12 05:17:42,655 DEBUG [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs 2023-07-12 05:17:42,655 INFO [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43053%2C1689139058705.meta:.meta(num 1689139059556) 2023-07-12 05:17:42,660 DEBUG [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/oldWALs 2023-07-12 05:17:42,660 INFO [RS:2;jenkins-hbase20:43053] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43053%2C1689139058705:(num 1689139059320) 2023-07-12 05:17:42,660 DEBUG [RS:2;jenkins-hbase20:43053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,660 INFO [RS:2;jenkins-hbase20:43053] regionserver.LeaseManager(133): Closed leases 2023-07-12 05:17:42,660 INFO [RS:2;jenkins-hbase20:43053] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 05:17:42,660 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 05:17:42,661 INFO [RS:2;jenkins-hbase20:43053] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43053 2023-07-12 05:17:42,672 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,672 INFO [RS:0;jenkins-hbase20:36541] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36541,1689139058420; zookeeper connection closed. 2023-07-12 05:17:42,672 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:36541-0x1007f9d12c00001, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:42,672 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@304f0909] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@304f0909 2023-07-12 05:17:42,673 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43053,1689139058705 2023-07-12 05:17:42,673 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 05:17:42,673 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,34301,1689139060229 already deleted, retry=false 2023-07-12 05:17:42,673 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,34301,1689139060229 expired; onlineServers=2 2023-07-12 05:17:42,673 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38957,1689139058554] 2023-07-12 05:17:42,673 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38957,1689139058554; numProcessing=3 2023-07-12 05:17:42,674 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38957,1689139058554 already deleted, retry=false 2023-07-12 05:17:42,674 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38957,1689139058554 expired; onlineServers=1 2023-07-12 05:17:42,674 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43053,1689139058705] 2023-07-12 05:17:42,674 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43053,1689139058705; numProcessing=4 2023-07-12 05:17:42,675 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43053,1689139058705 already deleted, retry=false 2023-07-12 05:17:42,675 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43053,1689139058705 expired; onlineServers=0 2023-07-12 05:17:42,675 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase20.apache.org,46251,1689139058273' ***** 2023-07-12 05:17:42,675 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 05:17:42,676 DEBUG [M:0;jenkins-hbase20:46251] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c6f9832, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 05:17:42,676 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 05:17:42,678 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 05:17:42,678 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 05:17:42,679 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 05:17:42,679 INFO [M:0;jenkins-hbase20:46251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@631c9e79{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 05:17:42,679 INFO [M:0;jenkins-hbase20:46251] server.AbstractConnector(383): Stopped ServerConnector@11bf68c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:42,680 INFO [M:0;jenkins-hbase20:46251] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 05:17:42,680 INFO [M:0;jenkins-hbase20:46251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d75332a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 05:17:42,681 INFO [M:0;jenkins-hbase20:46251] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@216fc6c1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/hadoop.log.dir/,STOPPED} 2023-07-12 05:17:42,681 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46251,1689139058273 2023-07-12 05:17:42,681 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46251,1689139058273; all regions closed. 
2023-07-12 05:17:42,681 DEBUG [M:0;jenkins-hbase20:46251] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 05:17:42,681 INFO [M:0;jenkins-hbase20:46251] master.HMaster(1491): Stopping master jetty server 2023-07-12 05:17:42,682 INFO [M:0;jenkins-hbase20:46251] server.AbstractConnector(383): Stopped ServerConnector@a9d7a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 05:17:42,682 DEBUG [M:0;jenkins-hbase20:46251] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 05:17:42,682 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 05:17:42,682 DEBUG [M:0;jenkins-hbase20:46251] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 05:17:42,682 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139059055] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689139059055,5,FailOnTimeoutGroup] 2023-07-12 05:17:42,682 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139059055] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689139059055,5,FailOnTimeoutGroup] 2023-07-12 05:17:42,682 INFO [M:0;jenkins-hbase20:46251] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 05:17:42,682 INFO [M:0;jenkins-hbase20:46251] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 05:17:42,683 INFO [M:0;jenkins-hbase20:46251] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-12 05:17:42,683 DEBUG [M:0;jenkins-hbase20:46251] master.HMaster(1512): Stopping service threads 2023-07-12 05:17:42,683 INFO [M:0;jenkins-hbase20:46251] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 05:17:42,683 ERROR [M:0;jenkins-hbase20:46251] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 05:17:42,683 INFO [M:0;jenkins-hbase20:46251] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 05:17:42,683 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 05:17:42,683 DEBUG [M:0;jenkins-hbase20:46251] zookeeper.ZKUtil(398): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 05:17:42,683 WARN [M:0;jenkins-hbase20:46251] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 05:17:42,683 INFO [M:0;jenkins-hbase20:46251] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 05:17:42,683 INFO [M:0;jenkins-hbase20:46251] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 05:17:42,684 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 05:17:42,684 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:42,684 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:42,684 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 05:17:42,684 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 05:17:42,684 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.29 KB heapSize=90.73 KB 2023-07-12 05:17:42,697 INFO [M:0;jenkins-hbase20:46251] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.29 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6aab33b54f774c448160c12ca4b8c8f6 2023-07-12 05:17:42,703 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6aab33b54f774c448160c12ca4b8c8f6 as hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6aab33b54f774c448160c12ca4b8c8f6 2023-07-12 05:17:42,708 INFO [M:0;jenkins-hbase20:46251] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41411/user/jenkins/test-data/efc350e4-8df0-d53b-7fd0-15cb554484ad/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6aab33b54f774c448160c12ca4b8c8f6, entries=22, sequenceid=175, filesize=11.1 K 2023-07-12 05:17:42,708 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegion(2948): Finished flush of dataSize ~76.29 KB/78116, heapSize ~90.72 KB/92896, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false 2023-07-12 05:17:42,710 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 05:17:42,710 DEBUG [M:0;jenkins-hbase20:46251] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 05:17:42,714 INFO [M:0;jenkins-hbase20:46251] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 05:17:42,714 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 05:17:42,714 INFO [M:0;jenkins-hbase20:46251] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46251 2023-07-12 05:17:42,715 DEBUG [M:0;jenkins-hbase20:46251] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,46251,1689139058273 already deleted, retry=false 2023-07-12 05:17:43,223 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:43,223 INFO [M:0;jenkins-hbase20:46251] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46251,1689139058273; zookeeper connection closed. 2023-07-12 05:17:43,224 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): master:46251-0x1007f9d12c00000, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:43,324 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:43,324 DEBUG [Listener at localhost.localdomain/37977-EventThread] zookeeper.ZKWatcher(600): regionserver:43053-0x1007f9d12c00003, quorum=127.0.0.1:55884, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 05:17:43,324 INFO [RS:2;jenkins-hbase20:43053] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43053,1689139058705; zookeeper connection closed. 
2023-07-12 05:17:43,324 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@791a24e1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@791a24e1 2023-07-12 05:17:43,324 INFO [Listener at localhost.localdomain/37977] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 05:17:43,324 WARN [Listener at localhost.localdomain/37977] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:43,328 INFO [Listener at localhost.localdomain/37977] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:43,437 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:43,438 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1936111480-148.251.75.209-1689139057631 (Datanode Uuid 80f590bb-fdd1-4e5a-a664-741d79fe48c6) service to localhost.localdomain/127.0.0.1:41411 2023-07-12 05:17:43,438 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data5/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,439 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data6/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,440 WARN [Listener at localhost.localdomain/37977] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:43,446 INFO [Listener at localhost.localdomain/37977] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:43,551 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:43,551 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1936111480-148.251.75.209-1689139057631 (Datanode Uuid fe61c401-8b58-4495-ae04-8e2104f8c356) service to localhost.localdomain/127.0.0.1:41411 2023-07-12 05:17:43,553 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data3/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,554 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data4/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,555 WARN [Listener at localhost.localdomain/37977] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 05:17:43,560 INFO [Listener at localhost.localdomain/37977] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 05:17:43,664 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 05:17:43,664 WARN [BP-1936111480-148.251.75.209-1689139057631 heartbeating to localhost.localdomain/127.0.0.1:41411] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1936111480-148.251.75.209-1689139057631 (Datanode Uuid 28680443-50bc-447e-9ebf-acd623957c7b) service to localhost.localdomain/127.0.0.1:41411 2023-07-12 05:17:43,665 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data1/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/21913914-7827-7b3e-cf57-d8cd01d890ff/cluster_e6485d50-854e-d29b-459b-9ec933cfa37f/dfs/data/data2/current/BP-1936111480-148.251.75.209-1689139057631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 05:17:43,679 INFO [Listener at localhost.localdomain/37977] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 05:17:43,797 INFO [Listener at localhost.localdomain/37977] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 05:17:43,824 INFO [Listener at localhost.localdomain/37977] hbase.HBaseTestingUtility(1293): Minicluster is down