2023-07-21 15:15:28,164 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba 2023-07-21 15:15:28,178 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins 2023-07-21 15:15:28,195 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 15:15:28,195 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a, deleteOnExit=true 2023-07-21 15:15:28,196 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 15:15:28,197 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/test.cache.data in system properties and HBase conf 2023-07-21 15:15:28,197 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 15:15:28,198 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir in system properties and HBase conf 2023-07-21 15:15:28,198 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 15:15:28,199 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 15:15:28,199 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 15:15:28,312 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 15:15:28,767 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 15:15:28,778 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:15:28,779 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 15:15:28,780 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 15:15:28,788 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:15:28,791 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 15:15:28,792 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 15:15:28,792 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 15:15:28,793 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:15:28,794 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 15:15:28,794 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/nfs.dump.dir in system properties and HBase conf 2023-07-21 15:15:28,794 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir in system properties and HBase conf 2023-07-21 15:15:28,795 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 15:15:28,796 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 15:15:28,796 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 15:15:29,416 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:15:29,421 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:15:29,762 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 15:15:30,055 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 15:15:30,079 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:15:30,129 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:15:30,173 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/Jetty_localhost_localdomain_34857_hdfs____mhqtv7/webapp 2023-07-21 15:15:30,340 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34857 2023-07-21 15:15:30,349 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 15:15:30,349 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 15:15:30,879 WARN [Listener at localhost.localdomain/37247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:15:30,951 WARN [Listener at localhost.localdomain/37247] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:15:30,970 WARN [Listener at localhost.localdomain/37247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:15:30,978 INFO [Listener at localhost.localdomain/37247] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:15:30,984 INFO [Listener at 
localhost.localdomain/37247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/Jetty_localhost_37729_datanode____.p8rxvj/webapp 2023-07-21 15:15:31,101 INFO [Listener at localhost.localdomain/37247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37729 2023-07-21 15:15:31,512 WARN [Listener at localhost.localdomain/45077] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:15:31,590 WARN [Listener at localhost.localdomain/45077] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:15:31,594 WARN [Listener at localhost.localdomain/45077] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:15:31,597 INFO [Listener at localhost.localdomain/45077] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:15:31,604 INFO [Listener at localhost.localdomain/45077] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/Jetty_localhost_42071_datanode____3wk71v/webapp 2023-07-21 15:15:31,712 INFO [Listener at localhost.localdomain/45077] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42071 2023-07-21 15:15:31,780 WARN [Listener at localhost.localdomain/34537] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:15:31,864 WARN [Listener at localhost.localdomain/34537] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 15:15:31,868 WARN [Listener at localhost.localdomain/34537] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 15:15:31,871 INFO [Listener at localhost.localdomain/34537] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 15:15:31,884 INFO [Listener at localhost.localdomain/34537] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/Jetty_localhost_46065_datanode____u43gyc/webapp 2023-07-21 15:15:32,033 INFO [Listener at localhost.localdomain/34537] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46065 2023-07-21 15:15:32,068 WARN [Listener at localhost.localdomain/38883] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 15:15:32,210 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x61da814a4bad9a81: Processing first storage report for DS-ec97d673-8164-46e0-a29f-1cd213b16f56 from datanode d7cc208c-0cc0-44f3-b6d2-9546a365644e 2023-07-21 15:15:32,212 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x61da814a4bad9a81: from storage DS-ec97d673-8164-46e0-a29f-1cd213b16f56 node DatanodeRegistration(127.0.0.1:35415, datanodeUuid=d7cc208c-0cc0-44f3-b6d2-9546a365644e, infoPort=32823, infoSecurePort=0, ipcPort=45077, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,212 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x28193fb7854ef435: Processing first storage report for DS-779658b6-4e98-4970-b3d3-fb613cb8802e from datanode 46efc94c-862b-40f5-85ed-c5871b5b137b 2023-07-21 15:15:32,213 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x28193fb7854ef435: from storage DS-779658b6-4e98-4970-b3d3-fb613cb8802e node DatanodeRegistration(127.0.0.1:36409, datanodeUuid=46efc94c-862b-40f5-85ed-c5871b5b137b, infoPort=43613, infoSecurePort=0, ipcPort=34537, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,213 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x61da814a4bad9a81: Processing first storage report for DS-cf51865e-81dc-46a9-ace4-0d1f832c198a from datanode d7cc208c-0cc0-44f3-b6d2-9546a365644e 2023-07-21 15:15:32,213 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x61da814a4bad9a81: from storage DS-cf51865e-81dc-46a9-ace4-0d1f832c198a node DatanodeRegistration(127.0.0.1:35415, datanodeUuid=d7cc208c-0cc0-44f3-b6d2-9546a365644e, infoPort=32823, infoSecurePort=0, ipcPort=45077, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,213 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x28193fb7854ef435: Processing first storage report for DS-24e5981a-688c-4148-9bd4-a5dad0bda6ae from datanode 46efc94c-862b-40f5-85ed-c5871b5b137b 2023-07-21 15:15:32,214 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x28193fb7854ef435: from storage DS-24e5981a-688c-4148-9bd4-a5dad0bda6ae node DatanodeRegistration(127.0.0.1:36409, datanodeUuid=46efc94c-862b-40f5-85ed-c5871b5b137b, infoPort=43613, infoSecurePort=0, ipcPort=34537, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,249 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde2d040ffa3e47d4: Processing first storage report for DS-3c205c17-2c52-402b-866d-d32f13caa455 from datanode e1a1719c-0d63-4736-b78e-7293476d32dc 2023-07-21 15:15:32,250 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde2d040ffa3e47d4: from storage DS-3c205c17-2c52-402b-866d-d32f13caa455 node DatanodeRegistration(127.0.0.1:46483, datanodeUuid=e1a1719c-0d63-4736-b78e-7293476d32dc, infoPort=40877, infoSecurePort=0, ipcPort=38883, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,250 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde2d040ffa3e47d4: Processing first storage report for 
DS-e520054d-1a0b-4436-87f1-66c282cf4b55 from datanode e1a1719c-0d63-4736-b78e-7293476d32dc 2023-07-21 15:15:32,250 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde2d040ffa3e47d4: from storage DS-e520054d-1a0b-4436-87f1-66c282cf4b55 node DatanodeRegistration(127.0.0.1:46483, datanodeUuid=e1a1719c-0d63-4736-b78e-7293476d32dc, infoPort=40877, infoSecurePort=0, ipcPort=38883, storageInfo=lv=-57;cid=testClusterID;nsid=565606698;c=1689952529515), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 15:15:32,515 DEBUG [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba 2023-07-21 15:15:32,596 INFO [Listener at localhost.localdomain/38883] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/zookeeper_0, clientPort=62052, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 15:15:32,626 INFO [Listener at localhost.localdomain/38883] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62052 2023-07-21 15:15:32,638 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:32,641 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:33,445 INFO [Listener at localhost.localdomain/38883] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 with version=8 2023-07-21 15:15:33,445 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/hbase-staging 2023-07-21 15:15:33,459 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 15:15:33,460 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 15:15:33,460 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 15:15:33,460 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-21 15:15:33,850 INFO [Listener at localhost.localdomain/38883] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 15:15:34,615 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:34,674 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:34,675 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:34,675 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:34,675 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:34,676 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:34,989 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:35,101 DEBUG [Listener at localhost.localdomain/38883] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 15:15:35,234 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43019 2023-07-21 15:15:35,248 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:35,252 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:35,274 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43019 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:35,353 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:430190x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:35,362 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43019-0x1018872b3790000 connected 2023-07-21 15:15:35,407 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:35,409 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-21 15:15:35,415 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:35,440 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-21 15:15:35,441 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43019 2023-07-21 15:15:35,442 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43019 2023-07-21 15:15:35,443 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-21 15:15:35,446 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43019 2023-07-21 15:15:35,505 INFO [Listener at localhost.localdomain/38883] log.Log(170): Logging initialized @8316ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 15:15:35,660 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:35,661 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:35,662 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:35,663 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 15:15:35,663 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:35,664 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:35,667 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:35,727 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 39551 2023-07-21 15:15:35,729 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:35,767 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:35,771 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11c58ab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:35,772 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:35,773 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3dbf6867{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:35,983 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:36,002 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:36,002 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:36,006 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:15:36,016 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,050 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@276e9a5b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-39551-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3240209289153899909/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:15:36,066 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@4c452b0b{HTTP/1.1, (http/1.1)}{0.0.0.0:39551} 2023-07-21 15:15:36,066 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @8877ms 2023-07-21 15:15:36,071 INFO [Listener at localhost.localdomain/38883] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3, hbase.cluster.distributed=false 2023-07-21 15:15:36,168 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:36,168 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,168 INFO [Listener 
at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,169 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:36,169 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,169 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:36,177 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:36,182 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33925 2023-07-21 15:15:36,185 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:36,200 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:36,201 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,204 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,207 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33925 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:36,215 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:339250x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:36,217 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33925-0x1018872b3790001 connected 2023-07-21 15:15:36,222 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:36,229 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:36,230 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:36,231 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33925 2023-07-21 15:15:36,232 DEBUG [Listener at 
localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33925 2023-07-21 15:15:36,232 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33925 2023-07-21 15:15:36,232 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33925 2023-07-21 15:15:36,236 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33925 2023-07-21 15:15:36,239 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:36,240 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:36,240 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:36,241 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:36,242 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:36,242 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:36,242 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:36,243 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 34741 2023-07-21 15:15:36,244 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:36,256 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,256 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50350a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:36,256 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,257 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4de48637{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:36,393 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:36,395 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:36,395 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:36,396 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:15:36,397 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,402 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6bd6e5db{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-34741-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5652341109292010800/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:36,403 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@51871d5f{HTTP/1.1, (http/1.1)}{0.0.0.0:34741} 2023-07-21 15:15:36,403 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @9214ms 2023-07-21 15:15:36,414 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:36,414 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,415 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:15:36,415 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:36,415 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,415 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:36,415 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:36,417 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38527 2023-07-21 15:15:36,418 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:36,419 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:36,420 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,421 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,423 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38527 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:36,426 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:385270x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:36,428 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:385270x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:36,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38527-0x1018872b3790002 connected 2023-07-21 15:15:36,429 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:36,429 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:36,430 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38527 2023-07-21 15:15:36,430 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38527 2023-07-21 15:15:36,430 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38527 2023-07-21 15:15:36,435 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38527 2023-07-21 15:15:36,436 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38527 2023-07-21 15:15:36,438 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:36,438 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:36,438 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:36,439 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:36,439 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:36,439 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:36,440 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:36,440 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 44259 2023-07-21 15:15:36,440 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:36,445 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,445 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1dd83568{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:36,446 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,446 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66f1a447{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:36,571 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:36,573 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:36,574 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:36,574 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:15:36,576 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,577 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@47469f82{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-44259-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8466773508050487177/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:36,579 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@4876c5d5{HTTP/1.1, (http/1.1)}{0.0.0.0:44259} 2023-07-21 15:15:36,579 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @9390ms 2023-07-21 15:15:36,597 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:36,597 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,598 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:15:36,598 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:36,598 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:36,598 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:36,599 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:36,601 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36355 2023-07-21 15:15:36,601 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:36,604 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:36,605 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,607 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,609 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36355 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:36,618 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:363550x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:36,620 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:363550x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:36,620 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36355-0x1018872b3790003 connected 2023-07-21 15:15:36,621 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:36,622 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:36,628 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36355 2023-07-21 15:15:36,629 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36355 2023-07-21 15:15:36,629 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36355 2023-07-21 15:15:36,630 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36355 2023-07-21 15:15:36,630 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36355 2023-07-21 15:15:36,633 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:36,633 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:36,634 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:36,634 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:36,634 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:36,635 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:36,635 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:36,636 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 35957 2023-07-21 15:15:36,636 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:36,641 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,641 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f3c4bd9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:36,642 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,642 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33c3bc88{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:36,755 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:36,756 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:36,756 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:36,757 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:15:36,758 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:36,759 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4737c079{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-35957-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2742962603897669186/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:36,761 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@554c4191{HTTP/1.1, (http/1.1)}{0.0.0.0:35957} 2023-07-21 15:15:36,761 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @9572ms 2023-07-21 15:15:36,767 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:36,770 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@50f45965{HTTP/1.1, (http/1.1)}{0.0.0.0:36095} 2023-07-21 15:15:36,771 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @9582ms 2023-07-21 15:15:36,771 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:36,780 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:15:36,782 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:36,798 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:15:36,798 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:15:36,798 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:15:36,798 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:15:36,800 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:36,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:15:36,801 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,43019,1689952533620 from backup master directory 2023-07-21 15:15:36,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:15:36,804 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:36,805 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:15:36,805 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:15:36,806 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:36,809 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 15:15:36,810 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 15:15:36,906 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/hbase.id with ID: efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:36,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:36,968 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:37,037 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5734e33a to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:37,059 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3154e359, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:37,079 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:37,081 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 15:15:37,098 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 15:15:37,098 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 15:15:37,100 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:15:37,104 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:15:37,105 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:37,141 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store-tmp 2023-07-21 15:15:37,179 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:37,179 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:15:37,179 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:37,179 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:37,180 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:15:37,180 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:37,180 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:37,180 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:15:37,181 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:37,202 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43019%2C1689952533620, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/oldWALs, maxLogs=10 2023-07-21 15:15:37,258 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:37,258 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:37,258 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:37,269 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:15:37,334 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:15:37,335 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:37,336 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:37,336 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:37,340 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,342 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,404 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,411 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:15:37,441 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:15:37,452 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-21 15:15:37,457 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,459 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,474 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:15:37,478 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:37,479 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9934795360, jitterRate=-0.07475008070468903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:37,479 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:15:37,480 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:15:37,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:15:37,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:15:37,504 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 15:15:37,506 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 15:15:37,554 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-21 15:15:37,554 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:15:37,581 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 15:15:37,587 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 15:15:37,597 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 15:15:37,604 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:15:37,610 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:15:37,612 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:37,613 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:15:37,613 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:15:37,629 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:15:37,635 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:37,635 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:37,635 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:37,635 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:37,635 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:37,636 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,43019,1689952533620, sessionid=0x1018872b3790000, setting cluster-up flag (Was=false) 2023-07-21 15:15:37,654 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:37,657 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:15:37,659 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:37,664 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:37,668 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 15:15:37,669 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:37,671 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp 2023-07-21 15:15:37,733 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:15:37,742 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:15:37,743 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:15:37,745 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:15:37,745 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 15:15:37,765 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:37,765 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:37,768 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:37,778 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:15:37,778 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:15:37,778 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:15:37,786 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:15:37,786 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:15:37,786 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:15:37,786 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:15:37,786 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:15:37,786 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:15:37,789 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:15:37,789 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:15:37,789 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:15:37,792 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ReadOnlyZKClient(139): Connect 0x4dd2e2fc to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:37,792 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ReadOnlyZKClient(139): Connect 0x4786024c to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:37,792 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ReadOnlyZKClient(139): Connect 0x01825c0b to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:37,812 DEBUG [RS:0;jenkins-hbase17:33925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d4a47cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:37,813 DEBUG [RS:2;jenkins-hbase17:36355] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28484d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:37,812 DEBUG [RS:1;jenkins-hbase17:38527] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6bb5ab9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:37,816 DEBUG [RS:0;jenkins-hbase17:33925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6aa63a37, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:37,816 DEBUG [RS:2;jenkins-hbase17:36355] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c8d3a3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:37,816 DEBUG [RS:1;jenkins-hbase17:38527] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75f3d4e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:37,840 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:36355 2023-07-21 15:15:37,841 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:33925 2023-07-21 15:15:37,844 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:38527 2023-07-21 15:15:37,848 INFO [RS:1;jenkins-hbase17:38527] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:15:37,852 INFO [RS:1;jenkins-hbase17:38527] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:15:37,852 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:15:37,848 INFO [RS:2;jenkins-hbase17:36355] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:15:37,853 INFO [RS:2;jenkins-hbase17:36355] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:15:37,848 INFO [RS:0;jenkins-hbase17:33925] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:15:37,853 INFO [RS:0;jenkins-hbase17:33925] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:15:37,853 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:15:37,853 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:15:37,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 15:15:37,856 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:38527, startcode=1689952536414 2023-07-21 15:15:37,856 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:33925, startcode=1689952536167 2023-07-21 15:15:37,856 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:36355, startcode=1689952536596 2023-07-21 15:15:37,889 DEBUG [RS:1;jenkins-hbase17:38527] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:15:37,893 DEBUG [RS:2;jenkins-hbase17:36355] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:15:37,889 DEBUG [RS:0;jenkins-hbase17:33925] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:15:37,938 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:15:37,946 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:15:37,947 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:15:37,948 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 15:15:37,950 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:15:37,950 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:15:37,950 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:15:37,950 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:15:37,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:15:37,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:37,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:37,951 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:37,961 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952567961 2023-07-21 15:15:37,962 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34017, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:15:37,965 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:15:37,963 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38605, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:15:37,962 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45143, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:15:37,975 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:15:37,978 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:37,980 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:15:37,981 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 15:15:37,984 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:15:37,984 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:15:37,985 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:15:37,985 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:15:37,986 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:37,987 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:37,988 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:15:37,991 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:37,992 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:15:37,993 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:37,993 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:15:38,001 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:15:38,001 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:15:38,008 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952538004,5,FailOnTimeoutGroup] 2023-07-21 15:15:38,008 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952538008,5,FailOnTimeoutGroup] 2023-07-21 15:15:38,009 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,009 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-21 15:15:38,010 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,011 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,023 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:15:38,023 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:15:38,023 WARN [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:15:38,023 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 15:15:38,023 WARN [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:15:38,023 WARN [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 15:15:38,077 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:38,078 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:38,079 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:38,109 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:38,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:15:38,116 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:15:38,117 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:15:38,118 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:38,118 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:15:38,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:15:38,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:15:38,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:38,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:15:38,124 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:38527, startcode=1689952536414 2023-07-21 15:15:38,125 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:33925, 
startcode=1689952536167 2023-07-21 15:15:38,132 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:36355, startcode=1689952536596 2023-07-21 15:15:38,134 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,140 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:15:38,141 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,142 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:15:38,142 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:15:38,142 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:15:38,143 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,144 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:38,144 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:15:38,144 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39551 2023-07-21 15:15:38,148 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:15:38,152 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:15:38,152 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:38,152 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:15:38,153 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:15:38,157 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39551 2023-07-21 15:15:38,157 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:15:38,158 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:38,158 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:15:38,158 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39551 2023-07-21 15:15:38,164 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:38,168 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:38,174 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:15:38,175 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,175 WARN [RS:1;jenkins-hbase17:38527] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
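
[Editor's note] The repeated CompactionConfiguration entries above print the effective compaction settings for each column family of hbase:meta. A minimal sketch of the stock configuration keys behind those printed numbers; the values below simply mirror what the log shows and are not additional settings applied by this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettingsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);               // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);         // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f); // off-peak ratio 5.000000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);    // major period: 7 days in ms
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);  // major jitter 0.500000
        System.out.println("compaction min files = " + conf.getInt("hbase.hstore.compaction.min", -1));
      }
    }
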
2023-07-21 15:15:38,175 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,175 INFO [RS:1;jenkins-hbase17:38527] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:38,176 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,176 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,178 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36355,1689952536596] 2023-07-21 15:15:38,179 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38527,1689952536414] 2023-07-21 15:15:38,179 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33925,1689952536167] 2023-07-21 15:15:38,176 WARN [RS:0;jenkins-hbase17:33925] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:15:38,176 WARN [RS:2;jenkins-hbase17:36355] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:15:38,184 INFO [RS:2;jenkins-hbase17:36355] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:38,185 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,184 INFO [RS:0;jenkins-hbase17:33925] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:38,187 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,188 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:15:38,196 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
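
[Editor's note] The WALFactory entries above show each region server instantiating AsyncFSWALProvider. For reference, a minimal sketch of the standard hbase.wal.provider switch that selects it; only the stock key and its documented values are assumed here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" maps to org.apache.hadoop.hbase.wal.AsyncFSWALProvider, the default in HBase 2.x;
        // "filesystem" would select the classic FSHLog-based provider instead.
        conf.set("hbase.wal.provider", "asyncfs");
        System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
      }
    }
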
2023-07-21 15:15:38,199 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:15:38,224 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:38,227 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10218064960, jitterRate=-0.04836854338645935}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:15:38,227 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:15:38,227 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:15:38,227 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:15:38,227 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:15:38,227 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:15:38,227 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:15:38,232 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,232 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:15:38,233 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,233 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,233 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,233 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,233 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:15:38,237 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,237 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,239 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,239 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,242 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 15:15:38,242 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 15:15:38,252 DEBUG [RS:2;jenkins-hbase17:36355] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:15:38,252 DEBUG [RS:0;jenkins-hbase17:33925] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:15:38,252 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:15:38,254 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:15:38,268 INFO [RS:1;jenkins-hbase17:38527] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:15:38,270 INFO [RS:2;jenkins-hbase17:36355] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:15:38,268 INFO [RS:0;jenkins-hbase17:33925] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:15:38,313 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:15:38,314 INFO [RS:2;jenkins-hbase17:36355] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:15:38,315 INFO [RS:0;jenkins-hbase17:33925] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:15:38,318 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 15:15:38,323 INFO [RS:1;jenkins-hbase17:38527] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:15:38,359 INFO [RS:1;jenkins-hbase17:38527] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:15:38,359 INFO [RS:0;jenkins-hbase17:33925] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:15:38,359 INFO [RS:2;jenkins-hbase17:36355] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower 
bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:15:38,361 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,360 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,360 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,362 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:15:38,363 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:15:38,365 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:15:38,380 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,381 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,381 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:38,382 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, 
corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,382 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:38,383 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:0;jenkins-hbase17:33925] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,383 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,387 DEBUG [RS:1;jenkins-hbase17:38527] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,387 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,391 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:38,391 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,394 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,394 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,394 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,394 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,395 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:38,395 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,395 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,395 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,396 DEBUG [RS:2;jenkins-hbase17:36355] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:38,400 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,400 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,404 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,404 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,404 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,427 INFO [RS:1;jenkins-hbase17:38527] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:15:38,431 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38527,1689952536414-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,438 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:38,438 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,438 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,447 INFO [RS:0;jenkins-hbase17:33925] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:15:38,456 INFO [RS:2;jenkins-hbase17:36355] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:15:38,464 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33925,1689952536167-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,468 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36355,1689952536596-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:38,472 INFO [RS:1;jenkins-hbase17:38527] regionserver.Replication(203): jenkins-hbase17.apache.org,38527,1689952536414 started 2023-07-21 15:15:38,472 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38527,1689952536414, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38527, sessionid=0x1018872b3790002 2023-07-21 15:15:38,476 DEBUG [jenkins-hbase17:43019] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:15:38,476 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:15:38,476 DEBUG [RS:1;jenkins-hbase17:38527] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,476 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38527,1689952536414' 2023-07-21 15:15:38,477 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:15:38,482 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:15:38,483 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:15:38,483 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:15:38,483 DEBUG [RS:1;jenkins-hbase17:38527] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,483 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38527,1689952536414' 2023-07-21 15:15:38,483 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:38,485 DEBUG [RS:1;jenkins-hbase17:38527] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:38,486 DEBUG [RS:1;jenkins-hbase17:38527] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:15:38,486 INFO 
[RS:1;jenkins-hbase17:38527] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:15:38,486 INFO [RS:1;jenkins-hbase17:38527] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:15:38,493 DEBUG [jenkins-hbase17:43019] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:38,495 DEBUG [jenkins-hbase17:43019] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:38,495 DEBUG [jenkins-hbase17:43019] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:38,495 DEBUG [jenkins-hbase17:43019] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:38,495 DEBUG [jenkins-hbase17:43019] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:38,496 INFO [RS:2;jenkins-hbase17:36355] regionserver.Replication(203): jenkins-hbase17.apache.org,36355,1689952536596 started 2023-07-21 15:15:38,496 INFO [RS:0;jenkins-hbase17:33925] regionserver.Replication(203): jenkins-hbase17.apache.org,33925,1689952536167 started 2023-07-21 15:15:38,496 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36355,1689952536596, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36355, sessionid=0x1018872b3790003 2023-07-21 15:15:38,496 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33925,1689952536167, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33925, sessionid=0x1018872b3790001 2023-07-21 15:15:38,497 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:15:38,497 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:15:38,497 DEBUG [RS:2;jenkins-hbase17:36355] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,497 DEBUG [RS:0;jenkins-hbase17:33925] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,498 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36355,1689952536596' 2023-07-21 15:15:38,499 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33925,1689952536167' 2023-07-21 15:15:38,499 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:15:38,499 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:15:38,500 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:15:38,500 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:15:38,500 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 
15:15:38,500 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:15:38,500 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:15:38,500 DEBUG [RS:2;jenkins-hbase17:36355] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:38,500 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:15:38,500 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36355,1689952536596' 2023-07-21 15:15:38,500 DEBUG [RS:0;jenkins-hbase17:33925] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:38,501 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:38,501 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33925,1689952536167' 2023-07-21 15:15:38,501 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:38,501 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38527,1689952536414, state=OPENING 2023-07-21 15:15:38,501 DEBUG [RS:2;jenkins-hbase17:36355] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:38,502 DEBUG [RS:0;jenkins-hbase17:33925] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:38,502 DEBUG [RS:0;jenkins-hbase17:33925] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:15:38,502 INFO [RS:0;jenkins-hbase17:33925] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:15:38,502 INFO [RS:0;jenkins-hbase17:33925] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:15:38,507 DEBUG [RS:2;jenkins-hbase17:36355] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:15:38,507 INFO [RS:2;jenkins-hbase17:36355] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:15:38,508 INFO [RS:2;jenkins-hbase17:36355] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
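
[Editor's note] The "Quota support disabled" entries above mean the RPC and space quota managers are no-ops in this run. A minimal sketch of the stock switch that would enable them; flipping it on a live cluster would also require restarting the servers, which this sketch does not attempt.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaSwitchSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // hbase.quota.enabled defaults to false, which is why both quota managers log "disabled" above.
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println("quotas enabled = " + conf.getBoolean("hbase.quota.enabled", false));
      }
    }
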
2023-07-21 15:15:38,508 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 15:15:38,509 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:38,509 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:15:38,513 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:38,553 WARN [ReadOnlyZKClient-127.0.0.1:62052@0x5734e33a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:15:38,590 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:38,612 INFO [RS:2;jenkins-hbase17:36355] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36355%2C1689952536596, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:38,614 INFO [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38527%2C1689952536414, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:38,617 INFO [RS:0;jenkins-hbase17:33925] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33925%2C1689952536167, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:38,617 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:38,621 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38527] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:55776 deadline: 1689952598618, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,658 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:38,658 DEBUG [RS-EventLoopGroup-5-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:38,682 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:38,696 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:38,696 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:38,696 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:38,711 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:38,711 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:38,712 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:38,739 INFO [RS:0;jenkins-hbase17:33925] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167/jenkins-hbase17.apache.org%2C33925%2C1689952536167.1689952538620 2023-07-21 15:15:38,740 INFO [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414/jenkins-hbase17.apache.org%2C38527%2C1689952536414.1689952538620 2023-07-21 15:15:38,740 DEBUG [RS:0;jenkins-hbase17:33925] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:38,743 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:38,748 DEBUG [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:38,748 INFO [RS:2;jenkins-hbase17:36355] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596/jenkins-hbase17.apache.org%2C36355%2C1689952536596.1689952538616 2023-07-21 15:15:38,753 DEBUG [RS:2;jenkins-hbase17:36355] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:38,758 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:38,765 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:38,790 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:15:38,791 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:38,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38527%2C1689952536414.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:38,830 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:38,848 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:38,849 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:38,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414/jenkins-hbase17.apache.org%2C38527%2C1689952536414.meta.1689952538803.meta 2023-07-21 15:15:38,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], 
DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:38,865 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:38,866 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:15:38,871 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:15:38,875 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 15:15:38,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:15:38,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:38,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:15:38,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:15:38,908 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:15:38,911 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:15:38,911 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:15:38,912 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:15:38,913 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 15:15:38,913 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:15:38,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:15:38,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:15:38,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:15:38,918 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:38,918 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:15:38,920 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:15:38,920 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:15:38,921 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:15:38,922 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:38,923 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:15:38,929 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:15:38,934 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:15:38,937 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:15:38,939 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9856236960, jitterRate=-0.08206640183925629}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:15:38,939 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:15:38,959 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689952538736 2023-07-21 15:15:38,989 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:15:38,990 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:15:38,993 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,38527,1689952536414, state=OPEN 2023-07-21 15:15:38,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:15:38,997 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:15:39,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 15:15:39,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,38527,1689952536414 in 484 msec 2023-07-21 15:15:39,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 15:15:39,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 769 msec 2023-07-21 15:15:39,060 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2920 sec 
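
[Editor's note] The records above echo two groups of settings: the AsyncFSWAL sizing (blocksize=256 MB, rollsize=128 MB, maxLogs=32) and the split policy used when hbase:meta was opened (SteppingSplitPolicy over a jittered ~10 GB desiredMaxFileSize). A minimal sketch of the standard keys those numbers come from; the values restate what the log prints rather than introduce extra configuration for this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndSplitSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = 0.5 * blocksize = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // maxLogs=32
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);  // jittered per region at open time
        System.out.println("maxlogs = " + conf.getInt("hbase.regionserver.maxlogs", -1));
      }
    }
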
2023-07-21 15:15:39,060 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952539060, completionTime=-1 2023-07-21 15:15:39,060 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 15:15:39,060 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 15:15:39,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:15:39,160 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952599160 2023-07-21 15:15:39,161 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952659160 2023-07-21 15:15:39,161 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 100 msec 2023-07-21 15:15:39,161 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:39,176 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:15:39,177 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:15:39,192 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43019,1689952533620-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:39,192 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43019,1689952533620-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:39,193 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:39,193 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43019,1689952533620-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
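
[Editor's note] The HMaster record above shows the rsgroup startup worker creating 'hbase:rsgroup' with the MultiRowMutationEndpoint coprocessor, the DisabledRegionSplitPolicy, and a single 'm' family. A hedged sketch of building an equivalent descriptor through the public Admin API; the table name below is a hypothetical stand-in so the example does not touch the system table.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo_rsgroup_copy"))   // hypothetical name, not the system table
              .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
              .setRegionSplitPolicyClassName(
                  "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                  .setMaxVersions(1)                                 // VERSIONS => '1'
                  .setBloomFilterType(BloomType.ROW)                 // BLOOMFILTER => 'ROW'
                  .build());
          admin.createTable(td.build());
        }
      }
    }
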
2023-07-21 15:15:39,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:43019, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:39,197 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:39,200 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:39,217 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 15:15:39,223 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,227 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 empty. 2023-07-21 15:15:39,228 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,228 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 15:15:39,232 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-21 15:15:39,232 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:39,237 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 15:15:39,243 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:39,245 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:39,251 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,252 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/namespace/7697a92683cfac49519e4a4111355983 empty. 
2023-07-21 15:15:39,261 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,261 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 15:15:39,316 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:39,321 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:39,331 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:39,337 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:39,409 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:39,410 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:15:39,410 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,410 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,410 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
after waiting 0 ms 2023-07-21 15:15:39,410 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,410 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,410 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:15:39,411 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:39,412 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:15:39,413 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,413 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,413 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:15:39,413 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,413 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,413 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:15:39,424 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:39,426 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:39,456 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952539427"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952539427"}]},"ts":"1689952539427"} 2023-07-21 15:15:39,456 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952539428"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952539428"}]},"ts":"1689952539428"} 2023-07-21 15:15:39,505 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:39,513 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
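The Put entries above are the rows CreateTableProcedure writes into hbase:meta. Purely as a hedged illustration (not part of the test), the same 'info' columns could be read back with the standard client API; the helper name and the decision to print raw Results are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaDumpSketch {
      // Scans the 'info' family of hbase:meta, i.e. the regioninfo/state/sn columns
      // that appear in the Put JSON logged above.
      public static void dump(Connection conn) throws IOException {
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
          for (Result row : scanner) {
            System.out.println(row);
          }
        }
      }
    }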
2023-07-21 15:15:39,513 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:39,517 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:39,522 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952539517"}]},"ts":"1689952539517"} 2023-07-21 15:15:39,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952539514"}]},"ts":"1689952539514"} 2023-07-21 15:15:39,536 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 15:15:39,537 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 15:15:39,540 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:39,540 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:39,540 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:39,540 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:39,540 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:39,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:39,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:39,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:39,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:39,543 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:39,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN}] 2023-07-21 15:15:39,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN}] 2023-07-21 15:15:39,548 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN 2023-07-21 15:15:39,549 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN 2023-07-21 15:15:39,552 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:39,553 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,33925,1689952536167; forceNewPlan=false, retain=false 2023-07-21 15:15:39,554 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-21 15:15:39,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:39,558 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:39,558 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952539557"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952539557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952539557"}]},"ts":"1689952539557"} 2023-07-21 15:15:39,558 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952539557"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952539557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952539557"}]},"ts":"1689952539557"} 2023-07-21 15:15:39,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33925,1689952536167}] 2023-07-21 15:15:39,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:39,716 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:39,717 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:39,721 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:39,722 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:39,722 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35212, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:39,726 INFO [RS-EventLoopGroup-5-2] 
ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56128, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:39,731 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,731 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:39,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,733 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:39,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:39,733 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:15:39,734 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. service=MultiRowMutationService 2023-07-21 15:15:39,735 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:15:39,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:39,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,738 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,743 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,747 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:15:39,747 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:15:39,747 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:15:39,748 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:15:39,748 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 603dc738ccec189e3bde34ff84c46389 columnFamilyName m 
2023-07-21 15:15:39,748 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7697a92683cfac49519e4a4111355983 columnFamilyName info 2023-07-21 15:15:39,749 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(310): Store=603dc738ccec189e3bde34ff84c46389/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:39,751 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(310): Store=7697a92683cfac49519e4a4111355983/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:39,757 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,760 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,767 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:39,767 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:39,781 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:39,782 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7697a92683cfac49519e4a4111355983; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11403453920, jitterRate=0.0620294064283371}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:39,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:39,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:15:39,785 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 603dc738ccec189e3bde34ff84c46389; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3fa69ea6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:39,785 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:15:39,786 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., pid=8, masterSystemTime=1689952539716 2023-07-21 15:15:39,793 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., pid=9, masterSystemTime=1689952539721 2023-07-21 15:15:39,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,803 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:39,806 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:39,806 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952539805"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952539805"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952539805"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952539805"}]},"ts":"1689952539805"} 2023-07-21 15:15:39,808 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:39,809 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
2023-07-21 15:15:39,811 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:39,812 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952539811"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952539811"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952539811"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952539811"}]},"ts":"1689952539811"} 2023-07-21 15:15:39,819 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 15:15:39,819 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,36355,1689952536596 in 247 msec 2023-07-21 15:15:39,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 15:15:39,822 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33925,1689952536167 in 255 msec 2023-07-21 15:15:39,824 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-21 15:15:39,825 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN in 275 msec 2023-07-21 15:15:39,825 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 15:15:39,825 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN in 277 msec 2023-07-21 15:15:39,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:39,826 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952539826"}]},"ts":"1689952539826"} 2023-07-21 15:15:39,827 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:39,827 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952539827"}]},"ts":"1689952539827"} 2023-07-21 15:15:39,829 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 15:15:39,831 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 15:15:39,832 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:39,836 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:39,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 670 msec 2023-07-21 15:15:39,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 15:15:39,846 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:39,846 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:39,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 604 msec 2023-07-21 15:15:39,879 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:39,886 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35226, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:39,894 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:39,897 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:39,901 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 15:15:39,901 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 15:15:39,911 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 15:15:39,935 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:39,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 48 msec 2023-07-21 15:15:39,947 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:15:39,974 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:39,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 35 msec 2023-07-21 15:15:39,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 15:15:39,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:39,997 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:39,999 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 15:15:39,999 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.193sec 2023-07-21 15:15:40,002 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 15:15:40,004 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 15:15:40,004 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 15:15:40,006 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43019,1689952533620-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
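The two CreateNamespaceProcedure entries above are the master creating the built-in 'default' and 'hbase' namespaces on its own. For orientation only, a client would create its own namespace through the Admin API along these lines; the namespace name 'testns' is an invented placeholder.

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class NamespaceSketch {
      // Creates a user namespace; the 'default' and 'hbase' namespaces in the log
      // above are created automatically by the master, not by client code.
      public static void createTestNamespace(Admin admin) throws IOException {
        admin.createNamespace(NamespaceDescriptor.create("testns").build());
      }
    }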
2023-07-21 15:15:40,007 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:15:40,007 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43019,1689952533620-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 15:15:40,012 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 15:15:40,018 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 15:15:40,098 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x7a8f3be3 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:40,103 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49c5894a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:40,124 DEBUG [hconnection-0x38dd441b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:40,154 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:40,167 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:40,168 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:40,181 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 15:15:40,185 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 15:15:40,213 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:15:40,214 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:40,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43019] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 15:15:40,222 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x2abe77a8 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:40,272 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f4f06b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:40,272 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:40,284 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:40,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018872b379000a connected 2023-07-21 15:15:40,345 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=422, OpenFileDescriptor=702, MaxFileDescriptor=60000, SystemLoadAverage=908, ProcessCount=186, AvailableMemoryMB=2605 2023-07-21 15:15:40,349 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-21 15:15:40,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:40,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:40,462 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:40,480 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:40,483 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:39253 2023-07-21 15:15:40,484 INFO [Listener at localhost.localdomain/38883] 
hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:40,496 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:40,498 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:40,510 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:40,514 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39253 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:40,528 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:392530x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:40,530 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:392530x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:15:40,531 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39253-0x1018872b379000b connected 2023-07-21 15:15:40,532 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 15:15:40,533 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:40,533 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 15:15:40,534 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39253 2023-07-21 15:15:40,534 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39253 2023-07-21 15:15:40,534 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 15:15:40,536 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39253 2023-07-21 15:15:40,539 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:40,539 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:40,539 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:40,540 INFO [Listener at localhost.localdomain/38883] 
http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:40,540 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:40,540 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:40,541 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 15:15:40,541 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 35827 2023-07-21 15:15:40,541 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:40,592 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:40,593 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5fc83f2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:40,593 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:40,594 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77e496fe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:40,845 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:40,846 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:40,847 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:40,847 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:15:40,861 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:40,862 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@79c5b1d7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-35827-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2060953752652672777/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:40,864 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started 
ServerConnector@26a4e210{HTTP/1.1, (http/1.1)}{0.0.0.0:35827} 2023-07-21 15:15:40,865 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @13676ms 2023-07-21 15:15:40,876 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:40,877 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:15:40,879 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:15:40,879 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:15:40,881 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:15:40,884 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ReadOnlyZKClient(139): Connect 0x16726dfc to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:40,910 DEBUG [RS:3;jenkins-hbase17:39253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18b58142, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:40,910 DEBUG [RS:3;jenkins-hbase17:39253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c8a068c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:40,921 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:39253 2023-07-21 15:15:40,921 INFO [RS:3;jenkins-hbase17:39253] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:15:40,921 INFO [RS:3;jenkins-hbase17:39253] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:15:40,921 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:15:40,922 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:39253, startcode=1689952540479 2023-07-21 15:15:40,922 DEBUG [RS:3;jenkins-hbase17:39253] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:15:40,937 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56841, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:15:40,937 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,937 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
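The "set balanceSwitch=false" request logged by MasterRpcServices a few entries above is the test disabling the load balancer. A minimal sketch of the corresponding HBase 2.x client call, assuming an HBaseTestingUtility instance named util:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class BalancerSwitchSketch {
      // Turns the balancer off synchronously, matching the balanceSwitch=false
      // request seen in the master RPC log above.
      public static void disableBalancer(HBaseTestingUtility util) throws IOException {
        util.getAdmin().balancerSwitch(false, true);
      }
    }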
2023-07-21 15:15:40,939 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:40,939 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:15:40,939 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39551 2023-07-21 15:15:40,944 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:40,944 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:40,944 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:40,944 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:40,945 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,39253,1689952540479] 2023-07-21 15:15:40,944 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:40,946 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,946 WARN [RS:3;jenkins-hbase17:39253] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:15:40,946 INFO [RS:3;jenkins-hbase17:39253] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:40,946 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,946 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:40,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:40,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:40,947 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:15:40,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:40,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:40,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:40,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,954 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,954 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:40,954 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 15:15:40,954 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,956 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:40,956 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:40,961 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:40,961 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:40,962 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:40,962 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:40,963 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:15:40,963 INFO [RS:3;jenkins-hbase17:39253] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:15:40,966 INFO [RS:3;jenkins-hbase17:39253] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:15:40,972 INFO [RS:3;jenkins-hbase17:39253] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:15:40,973 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:40,976 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:15:40,986 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:40,987 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,987 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,987 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:40,988 DEBUG [RS:3;jenkins-hbase17:39253] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:41,008 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:41,008 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:41,008 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:41,036 INFO [RS:3;jenkins-hbase17:39253] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:15:41,036 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,39253,1689952540479-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:41,100 INFO [RS:3;jenkins-hbase17:39253] regionserver.Replication(203): jenkins-hbase17.apache.org,39253,1689952540479 started 2023-07-21 15:15:41,100 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,39253,1689952540479, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:39253, sessionid=0x1018872b379000b 2023-07-21 15:15:41,100 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:15:41,102 DEBUG [RS:3;jenkins-hbase17:39253] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:41,102 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,39253,1689952540479' 2023-07-21 15:15:41,102 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:15:41,103 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:15:41,106 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:15:41,106 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:15:41,106 DEBUG [RS:3;jenkins-hbase17:39253] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:41,106 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,39253,1689952540479' 2023-07-21 15:15:41,106 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:41,107 DEBUG [RS:3;jenkins-hbase17:39253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:41,107 DEBUG [RS:3;jenkins-hbase17:39253] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:15:41,107 INFO [RS:3;jenkins-hbase17:39253] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:15:41,107 INFO [RS:3;jenkins-hbase17:39253] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
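Both quota managers above report that quota support is disabled, which is the default. A hedged sketch of the single switch that would enable them, set on the configuration before the cluster (or mini cluster) starts; the wrapper class is illustrative, only the property name hbase.quota.enabled is an existing HBase setting:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaConfigSketch {
  public static Configuration withQuotasEnabled() {
    Configuration conf = HBaseConfiguration.create();
    // Checked by RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager;
    // it defaults to false, hence "Quota support disabled" in the log.
    conf.setBoolean("hbase.quota.enabled", true);
    return conf;
  }
}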
2023-07-21 15:15:41,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:41,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:41,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:41,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:41,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:41,145 DEBUG [hconnection-0x46251d71-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:41,155 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55812, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:41,165 DEBUG [hconnection-0x46251d71-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:41,172 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:41,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:41,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:41,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:41,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:41,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:48124 deadline: 1689953741190, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:41,194 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:41,197 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:41,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:41,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:41,201 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:33925, jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:41,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:41,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:41,211 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-21 15:15:41,211 INFO [RS:3;jenkins-hbase17:39253] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C39253%2C1689952540479, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:41,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:41,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:41,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup deadServerGroup 2023-07-21 15:15:41,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:41,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:41,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 15:15:41,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:41,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:41,253 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:41,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:41,260 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:41,280 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:41,281 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:41,289 INFO [RS:3;jenkins-hbase17:39253] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479/jenkins-hbase17.apache.org%2C39253%2C1689952540479.1689952541213 2023-07-21 15:15:41,290 DEBUG [RS:3;jenkins-hbase17:39253] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:15:41,293 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33925] to rsgroup deadServerGroup 2023-07-21 15:15:41,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:41,306 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:41,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 15:15:41,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:41,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(238): Moving server region 7697a92683cfac49519e4a4111355983, which do not belong to RSGroup deadServerGroup 2023-07-21 15:15:41,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:15:41,317 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:15:41,321 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:15:41,322 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:41,323 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952541322"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952541322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952541322"}]},"ts":"1689952541322"} 2023-07-21 15:15:41,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33925,1689952536167}] 2023-07-21 15:15:41,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:15:41,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:15:41,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
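The group operations above (add rsgroup deadServerGroup, then move jenkins-hbase17.apache.org:33925 into it) go through the RSGroupAdminClient named in the earlier stack traces. A minimal sketch of the same two calls against a running cluster, assuming the RSGroupAdminClient(Connection) constructor from the hbase-rsgroup module; only live region-server addresses are accepted, which is why offering the master address jenkins-hbase17.apache.org:43019 earlier ended in the ConstraintException "is either offline or it does not exist":

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class DeadServerGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create the group seen in the log.
      rsGroupAdmin.addRSGroup("deadServerGroup");

      // Move one region server (host + RS port, not the master) into the group.
      // Regions on it that do not belong to the target group are moved off first,
      // which is what the REOPEN/MOVE of hbase:namespace above is doing.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 33925)),
          "deadServerGroup");
    }
  }
}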
2023-07-21 15:15:41,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 7697a92683cfac49519e4a4111355983 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 15:15:41,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/.tmp/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:15:41,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/.tmp/info/f7f6dd522e854d8fab91aaec79abb8df as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:15:41,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 15:15:41,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 7697a92683cfac49519e4a4111355983 in 331ms, sequenceid=6, compaction requested=false 2023-07-21 15:15:41,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 15:15:41,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 15:15:41,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
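The close path above first flushes the region's 78-byte memstore to a new HFile. The same flush can be requested explicitly through the Admin API; a short sketch, with the table name taken from the log and the surrounding class purely illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Flush every region of hbase:namespace; each store writes its memstore to a
      // .tmp file and commits it under the column family directory, as in the log.
      admin.flush(TableName.valueOf("hbase:namespace"));
    }
  }
}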
2023-07-21 15:15:41,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:15:41,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 7697a92683cfac49519e4a4111355983 move to jenkins-hbase17.apache.org,39253,1689952540479 record at close sequenceid=6 2023-07-21 15:15:41,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:41,873 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=CLOSED 2023-07-21 15:15:41,873 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952541873"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952541873"}]},"ts":"1689952541873"} 2023-07-21 15:15:41,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 15:15:41,893 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33925,1689952536167 in 558 msec 2023-07-21 15:15:41,897 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,39253,1689952540479; forceNewPlan=false, retain=false 2023-07-21 15:15:42,048 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:15:42,048 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,049 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952542048"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952542048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952542048"}]},"ts":"1689952542048"} 2023-07-21 15:15:42,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:42,208 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,208 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:42,213 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43270, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:42,224 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:42,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:42,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:42,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,229 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,231 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:15:42,231 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:15:42,232 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7697a92683cfac49519e4a4111355983 columnFamilyName info 2023-07-21 15:15:42,250 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:15:42,251 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(310): Store=7697a92683cfac49519e4a4111355983/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:42,253 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:42,264 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7697a92683cfac49519e4a4111355983; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9907976000, jitterRate=-0.07724782824516296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:42,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:15:42,266 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., pid=14, masterSystemTime=1689952542207 2023-07-21 15:15:42,272 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
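The REOPEN/MOVE procedure above is how the master relocates the namespace region off the server being moved into deadServerGroup. The same relocation can be requested directly; a sketch assuming the Admin#move(byte[] encodedRegionName, ServerName) overload of this branch, with the encoded region name and destination copied from the log for illustration:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Runs the same TransitRegionStateProcedure: close (with flush) on the old
      // server, then an OpenRegionProcedure on the destination server.
      admin.move(Bytes.toBytes("7697a92683cfac49519e4a4111355983"),
          ServerName.valueOf("jenkins-hbase17.apache.org", 39253, 1689952540479L));
    }
  }
}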
2023-07-21 15:15:42,273 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:42,274 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,274 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952542274"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952542274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952542274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952542274"}]},"ts":"1689952542274"} 2023-07-21 15:15:42,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 15:15:42,285 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,39253,1689952540479 in 225 msec 2023-07-21 15:15:42,289 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE in 971 msec 2023-07-21 15:15:42,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 15:15:42,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,33925,1689952536167] are moved back to default 2023-07-21 15:15:42,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-21 15:15:42,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:42,326 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:42,326 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:42,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-21 15:15:42,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:42,339 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:42,343 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), 
service=AdminService 2023-07-21 15:15:42,343 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33925] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33925,1689952536167' ***** 2023-07-21 15:15:42,343 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33925] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x38dd441b 2023-07-21 15:15:42,343 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:42,350 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:42,361 INFO [RS:0;jenkins-hbase17:33925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6bd6e5db{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:42,368 INFO [RS:0;jenkins-hbase17:33925] server.AbstractConnector(383): Stopped ServerConnector@51871d5f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:42,368 INFO [RS:0;jenkins-hbase17:33925] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:42,370 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:42,371 INFO [RS:0;jenkins-hbase17:33925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4de48637{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:42,374 INFO [RS:0;jenkins-hbase17:33925] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50350a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:42,380 INFO [RS:0;jenkins-hbase17:33925] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:15:42,380 INFO [RS:0;jenkins-hbase17:33925] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:15:42,381 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:15:42,381 INFO [RS:0;jenkins-hbase17:33925] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:15:42,381 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:42,381 DEBUG [RS:0;jenkins-hbase17:33925] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4dd2e2fc to 127.0.0.1:62052 2023-07-21 15:15:42,381 DEBUG [RS:0;jenkins-hbase17:33925] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:42,381 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33925,1689952536167; all regions closed. 
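The "STOPPED: Called by admin client" shutdown above is how the test manufactures a dead server inside deadServerGroup. A sketch of issuing the same stop through the Admin API, hedged because the exact call the test uses is not visible in this excerpt; stopRegionServer takes the host:port of the region server:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class StopRegionServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The region server stops itself; its ephemeral /hbase/rs znode is deleted
      // and the master begins dead-server processing, as the following entries show.
      admin.stopRegionServer("jenkins-hbase17.apache.org:33925");
    }
  }
}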
2023-07-21 15:15:42,401 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 15:15:42,401 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 15:15:42,413 DEBUG [RS:0;jenkins-hbase17:33925] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:42,413 INFO [RS:0;jenkins-hbase17:33925] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33925%2C1689952536167:(num 1689952538620) 2023-07-21 15:15:42,413 DEBUG [RS:0;jenkins-hbase17:33925] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:42,414 INFO [RS:0;jenkins-hbase17:33925] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:42,415 INFO [RS:0;jenkins-hbase17:33925] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:15:42,415 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:42,415 INFO [RS:0;jenkins-hbase17:33925] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:15:42,415 INFO [RS:0;jenkins-hbase17:33925] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:15:42,415 INFO [RS:0;jenkins-hbase17:33925] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:15:42,419 INFO [RS:0;jenkins-hbase17:33925] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33925 2023-07-21 15:15:42,434 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:42,436 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:42,436 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:42,438 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:42,438 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:42,439 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33925,1689952536167] 2023-07-21 15:15:42,439 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing 
jenkins-hbase17.apache.org,33925,1689952536167; numProcessing=1 2023-07-21 15:15:42,440 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:42,441 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 2023-07-21 15:15:42,441 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:42,441 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:42,442 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33925,1689952536167 already deleted, retry=false 2023-07-21 15:15:42,442 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,33925,1689952536167 on jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:42,444 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,449 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 
znode expired, triggering replicatorRemoved event 2023-07-21 15:15:42,449 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 znode expired, triggering replicatorRemoved event 2023-07-21 15:15:42,454 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-21 15:15:42,455 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:42,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,455 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,33925,1689952536167 znode expired, triggering replicatorRemoved event 2023-07-21 15:15:42,460 WARN [RS-EventLoopGroup-5-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:33925 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:33925 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:15:42,461 DEBUG [RS-EventLoopGroup-5-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:33925 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:33925 2023-07-21 15:15:42,467 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false 2023-07-21 15:15:42,469 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=15 for jenkins-hbase17.apache.org,33925,1689952536167 (carryingMeta=false) jenkins-hbase17.apache.org,33925,1689952536167/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4fa9b6b6[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:15:42,470 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:15:42,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,475 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:42,476 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:42,478 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:42,480 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=15, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false 2023-07-21 15:15:42,481 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:42,483 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:42,485 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,33925,1689952536167 had 0 regions 2023-07-21 15:15:42,485 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 15:15:42,487 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:42,491 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=15, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false, isMeta: false 2023-07-21 15:15:42,492 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:15:42,508 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167-splitting 2023-07-21 15:15:42,510 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167-splitting dir is empty, no logs to split. 2023-07-21 15:15:42,510 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33925,1689952536167 WAL count=0, meta=false 2023-07-21 15:15:42,519 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167-splitting dir is empty, no logs to split. 2023-07-21 15:15:42,519 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33925,1689952536167 WAL count=0, meta=false 2023-07-21 15:15:42,519 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,33925,1689952536167 WAL splitting is done? wals=0, meta=false 2023-07-21 15:15:42,527 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,33925,1689952536167 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33925,1689952536167-splitting does not exist. 
2023-07-21 15:15:42,542 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,33925,1689952536167 after splitting done 2023-07-21 15:15:42,542 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,33925,1689952536167 from processing; numProcessing=0 2023-07-21 15:15:42,545 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false in 93 msec 2023-07-21 15:15:42,577 DEBUG [hconnection-0xa7c10e1-shared-pool-6] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:42,579 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:42,583 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:42,585 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33925-0x1018872b3790001, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:42,585 INFO [RS:0;jenkins-hbase17:33925] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33925,1689952536167; zookeeper connection closed. 2023-07-21 15:15:42,585 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@12f69ebb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@12f69ebb 2023-07-21 15:15:42,640 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:42,640 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:42,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:42,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:15:42,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:42,679 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:42,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:42,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 15:15:42,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:15:42,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:42,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:42,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:15:42,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:42,703 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:33925] to rsgroup default 2023-07-21 15:15:42,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase17.apache.org:33925 during move-to-default rsgroup because not online 2023-07-21 15:15:42,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:42,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-21 15:15:42,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:42,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-21 15:15:42,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-21 15:15:42,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-21 15:15:42,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:42,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup deadServerGroup 2023-07-21 15:15:42,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:42,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:42,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:42,752 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 15:15:42,770 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:42,771 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:42,772 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:42,772 INFO [Listener at 
localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:42,772 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:42,772 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:42,772 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:42,779 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41299 2023-07-21 15:15:42,780 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:42,790 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:42,793 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:42,796 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:42,799 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41299 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:42,824 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:412990x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:42,826 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:412990x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:15:42,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41299-0x1018872b379000d connected 2023-07-21 15:15:42,830 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 15:15:42,833 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:42,848 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41299 2023-07-21 15:15:42,852 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41299 2023-07-21 15:15:42,853 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=41299 2023-07-21 15:15:42,853 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41299 2023-07-21 15:15:42,854 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41299 2023-07-21 15:15:42,856 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:42,856 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:42,856 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:42,857 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:42,857 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:42,857 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:42,858 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:42,858 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 36351 2023-07-21 15:15:42,859 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:42,889 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:42,889 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@727b6878{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:42,890 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:42,890 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7955a1e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:43,013 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:43,014 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:43,014 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:43,014 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:15:43,017 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:43,018 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@13d1a755{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-36351-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5123383608202712647/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:43,020 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@40ac24e8{HTTP/1.1, (http/1.1)}{0.0.0.0:36351} 2023-07-21 15:15:43,021 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @15832ms 2023-07-21 15:15:43,023 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:15:43,027 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:15:43,034 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:15:43,034 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:15:43,045 DEBUG [RS:4;jenkins-hbase17:41299] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:15:43,049 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ReadOnlyZKClient(139): Connect 0x63998d6f to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:43,094 DEBUG [RS:4;jenkins-hbase17:41299] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d5e6f6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:43,095 DEBUG [RS:4;jenkins-hbase17:41299] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6961e61f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:43,106 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase17:41299 2023-07-21 15:15:43,106 INFO [RS:4;jenkins-hbase17:41299] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:15:43,106 INFO [RS:4;jenkins-hbase17:41299] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:15:43,106 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:15:43,107 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43019,1689952533620 with isa=jenkins-hbase17.apache.org/136.243.18.41:41299, startcode=1689952542769 2023-07-21 15:15:43,107 DEBUG [RS:4;jenkins-hbase17:41299] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:15:43,110 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51335, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:15:43,111 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43019] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,111 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:15:43,113 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:15:43,113 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:15:43,113 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39551 2023-07-21 15:15:43,115 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:43,115 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:43,116 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:43,116 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:43,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41299,1689952542769] 2023-07-21 15:15:43,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:43,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:43,119 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,119 WARN [RS:4;jenkins-hbase17:41299] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:15:43,119 INFO [RS:4;jenkins-hbase17:41299] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:15:43,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:43,119 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:43,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:43,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:43,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:43,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:43,124 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:43,125 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:15:43,125 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,128 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43019,1689952533620] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 15:15:43,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:43,137 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:43,138 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:43,139 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,140 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ZKUtil(162): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:43,143 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:15:43,143 INFO [RS:4;jenkins-hbase17:41299] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:15:43,153 INFO [RS:4;jenkins-hbase17:41299] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:15:43,154 INFO [RS:4;jenkins-hbase17:41299] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:15:43,155 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:43,160 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:15:43,162 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:43,162 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,162 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,162 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,163 DEBUG [RS:4;jenkins-hbase17:41299] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:15:43,172 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:43,173 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:43,173 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:15:43,195 INFO [RS:4;jenkins-hbase17:41299] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:15:43,195 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41299,1689952542769-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:15:43,225 INFO [RS:4;jenkins-hbase17:41299] regionserver.Replication(203): jenkins-hbase17.apache.org,41299,1689952542769 started 2023-07-21 15:15:43,225 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41299,1689952542769, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41299, sessionid=0x1018872b379000d 2023-07-21 15:15:43,225 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:15:43,227 DEBUG [RS:4;jenkins-hbase17:41299] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,227 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41299,1689952542769' 2023-07-21 15:15:43,227 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:15:43,227 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:43,228 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:15:43,230 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:15:43,231 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:15:43,231 DEBUG [RS:4;jenkins-hbase17:41299] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:43,231 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41299,1689952542769' 2023-07-21 15:15:43,231 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:43,231 DEBUG [RS:4;jenkins-hbase17:41299] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:43,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:43,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:43,232 DEBUG [RS:4;jenkins-hbase17:41299] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:15:43,232 INFO [RS:4;jenkins-hbase17:41299] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:15:43,232 INFO [RS:4;jenkins-hbase17:41299] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 15:15:43,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:43,235 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:43,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:43,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:43,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:43,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:43,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:48124 deadline: 1689953743247, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:43,248 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	...
1 more 2023-07-21 15:15:43,250 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:43,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:43,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:43,253 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:43,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:43,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:43,302 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=475 (was 422) Potentially hanging thread: Session-HouseKeeper-69ecf8d9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp883848542-725-acceptor-0@58114d69-ServerConnector@40ac24e8{HTTP/1.1, (http/1.1)}{0.0.0.0:36351} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x16726dfc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase17:41299 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1963094838_17 at /127.0.0.1:43498 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x63998d6f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x63998d6f-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp245165333-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp245165333-636-acceptor-0@7d1bfdc-ServerConnector@26a4e210{HTTP/1.1, (http/1.1)}{0.0.0.0:35827} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x16726dfc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp883848542-727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_-1585307132_17 at /127.0.0.1:43490 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1585307132_17 at /127.0.0.1:60638 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x16726dfc-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase17:39253Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:39253 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1585307132_17 at /127.0.0.1:59770 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-6a36302b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp245165333-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp245165333-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp883848542-732 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41299 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:39253-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase17:41299-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp883848542-726 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp883848542-731 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp883848542-730 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x63998d6f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp245165333-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp883848542-729 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:41299Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1585307132_17 at /127.0.0.1:59796 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,39253,1689952540479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp245165333-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1585307132_17 at /127.0.0.1:54780 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp883848542-728 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp245165333-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x46251d71-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp245165333-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1963094838_17 at /127.0.0.1:39376 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=747 (was 702) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=908 (was 908), ProcessCount=189 (was 186) - ProcessCount LEAK? 
-, AvailableMemoryMB=2298 (was 2605) 2023-07-21 15:15:43,332 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=475, OpenFileDescriptor=747, MaxFileDescriptor=60000, SystemLoadAverage=908, ProcessCount=189, AvailableMemoryMB=2297 2023-07-21 15:15:43,333 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-21 15:15:43,339 INFO [RS:4;jenkins-hbase17:41299] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41299%2C1689952542769, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:15:43,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:43,348 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:43,351 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:43,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:43,352 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:43,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:43,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:43,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:43,394 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:15:43,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:43,400 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:15:43,400 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in 
unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:15:43,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:43,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:43,409 INFO [RS:4;jenkins-hbase17:41299] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769/jenkins-hbase17.apache.org%2C41299%2C1689952542769.1689952543343 2023-07-21 15:15:43,409 DEBUG [RS:4;jenkins-hbase17:41299] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK]] 2023-07-21 15:15:43,413 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:43,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:43,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:43,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:43,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:43,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:43,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:43,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:43,427 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:43,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:43,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:48124 deadline: 1689953743427, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:43,428 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:43,429 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:43,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:43,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:43,431 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:43,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:43,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:43,433 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-21 15:15:43,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$16(3053): Client=jenkins//136.243.18.41 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-21 15:15:43,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-21 15:15:43,462 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 15:15:43,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:15:43,465 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 24 msec 2023-07-21 15:15:43,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 15:15:43,578 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:43,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:43,584 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:43,589 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 17 2023-07-21 15:15:43,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-21 15:15:43,601 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:43,602 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:43,603 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:43,618 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:43,623 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:43,624 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 empty. 
2023-07-21 15:15:43,625 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:43,625 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-21 15:15:43,665 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:43,666 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => afae71fd235935cd48fe2f30974b4199, NAME => 'Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:43,687 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:43,687 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing afae71fd235935cd48fe2f30974b4199, disabling compactions & flushes 2023-07-21 15:15:43,687 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:43,687 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:43,687 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. after waiting 0 ms 2023-07-21 15:15:43,687 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:43,687 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 
2023-07-21 15:15:43,688 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for afae71fd235935cd48fe2f30974b4199: 2023-07-21 15:15:43,691 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:43,694 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952543693"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952543693"}]},"ts":"1689952543693"} 2023-07-21 15:15:43,696 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:43,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-21 15:15:43,698 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:43,698 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952543698"}]},"ts":"1689952543698"} 2023-07-21 15:15:43,700 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:15:43,703 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:43,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, ASSIGN}] 2023-07-21 15:15:43,706 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, ASSIGN 2023-07-21 15:15:43,707 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:43,843 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:15:43,844 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 15:15:43,844 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:15:43,844 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 15:15:43,845 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:15:43,845 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 15:15:43,857 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:15:43,858 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=afae71fd235935cd48fe2f30974b4199, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:43,858 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952543858"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952543858"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952543858"}]},"ts":"1689952543858"} 2023-07-21 15:15:43,862 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE; OpenRegionProcedure afae71fd235935cd48fe2f30974b4199, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:43,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-21 15:15:44,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 
2023-07-21 15:15:44,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => afae71fd235935cd48fe2f30974b4199, NAME => 'Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:44,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:44,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,024 INFO [StoreOpener-afae71fd235935cd48fe2f30974b4199-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,028 DEBUG [StoreOpener-afae71fd235935cd48fe2f30974b4199-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/f 2023-07-21 15:15:44,028 DEBUG [StoreOpener-afae71fd235935cd48fe2f30974b4199-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/f 2023-07-21 15:15:44,028 INFO [StoreOpener-afae71fd235935cd48fe2f30974b4199-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region afae71fd235935cd48fe2f30974b4199 columnFamilyName f 2023-07-21 15:15:44,029 INFO [StoreOpener-afae71fd235935cd48fe2f30974b4199-1] regionserver.HStore(310): Store=afae71fd235935cd48fe2f30974b4199/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:44,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,032 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:44,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened afae71fd235935cd48fe2f30974b4199; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10564202880, jitterRate=-0.016131937503814697}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:44,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for afae71fd235935cd48fe2f30974b4199: 2023-07-21 15:15:44,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199., pid=19, masterSystemTime=1689952544014 2023-07-21 15:15:44,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:44,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 
2023-07-21 15:15:44,052 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=afae71fd235935cd48fe2f30974b4199, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:44,052 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952544052"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952544052"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952544052"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952544052"}]},"ts":"1689952544052"} 2023-07-21 15:15:44,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-21 15:15:44,065 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; OpenRegionProcedure afae71fd235935cd48fe2f30974b4199, server=jenkins-hbase17.apache.org,36355,1689952536596 in 196 msec 2023-07-21 15:15:44,068 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 15:15:44,070 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, ASSIGN in 359 msec 2023-07-21 15:15:44,071 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:44,071 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952544071"}]},"ts":"1689952544071"} 2023-07-21 15:15:44,074 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-21 15:15:44,077 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:44,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 498 msec 2023-07-21 15:15:44,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-21 15:15:44,203 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 17 completed 2023-07-21 15:15:44,204 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:44,210 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:44,213 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56162, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:44,217 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-21 15:15:44,243 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:44,244 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:44,250 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43280, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:44,251 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:44,254 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:44,264 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-21 15:15:44,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateAndAssign 2023-07-21 15:15:44,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,298 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952544298"}]},"ts":"1689952544298"} 2023-07-21 15:15:44,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:15:44,302 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-21 15:15:44,304 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-21 15:15:44,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, UNASSIGN}] 2023-07-21 15:15:44,310 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, UNASSIGN 2023-07-21 15:15:44,312 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=afae71fd235935cd48fe2f30974b4199, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:44,312 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952544312"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952544312"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952544312"}]},"ts":"1689952544312"} 2023-07-21 15:15:44,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, 
state=RUNNABLE; CloseRegionProcedure afae71fd235935cd48fe2f30974b4199, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:44,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:15:44,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing afae71fd235935cd48fe2f30974b4199, disabling compactions & flushes 2023-07-21 15:15:44,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:44,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:44,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. after waiting 0 ms 2023-07-21 15:15:44,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 2023-07-21 15:15:44,483 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:15:44,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:44,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199. 
2023-07-21 15:15:44,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for afae71fd235935cd48fe2f30974b4199: 2023-07-21 15:15:44,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,501 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=afae71fd235935cd48fe2f30974b4199, regionState=CLOSED 2023-07-21 15:15:44,501 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689952544501"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952544501"}]},"ts":"1689952544501"} 2023-07-21 15:15:44,513 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 15:15:44,513 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure afae71fd235935cd48fe2f30974b4199, server=jenkins-hbase17.apache.org,36355,1689952536596 in 187 msec 2023-07-21 15:15:44,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 15:15:44,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=afae71fd235935cd48fe2f30974b4199, UNASSIGN in 207 msec 2023-07-21 15:15:44,519 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952544519"}]},"ts":"1689952544519"} 2023-07-21 15:15:44,521 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-21 15:15:44,522 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-21 15:15:44,532 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 258 msec 2023-07-21 15:15:44,590 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 15:15:44,591 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:15:44,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 15:15:44,604 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-21 15:15:44,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateAndAssign 2023-07-21 15:15:44,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,620 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; 
DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-21 15:15:44,622 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:44,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:44,630 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 15:15:44,634 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/recovered.edits] 2023-07-21 15:15:44,645 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199/recovered.edits/4.seqid 2023-07-21 15:15:44,646 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndAssign/afae71fd235935cd48fe2f30974b4199 2023-07-21 15:15:44,646 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-21 15:15:44,650 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,667 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-21 15:15:44,705 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 
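As the HFileArchiver entries show, dropping the table does not delete region files outright; DeleteTableProcedure hands them to the archiver, which relocates them under the cluster's archive directory (here archive/data/default/Group_testCreateAndAssign/...). A hedged sketch for inspecting what was archived, using the plain Hadoop FileSystem API (the namenode address and root path below are placeholders, not values from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListArchivedRegionFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder location; in practice this is <hbase.rootdir>/archive/data/<ns>/<table>.
        Path archived = new Path("hdfs://namenode:8020/hbase/archive/data/default/Group_testCreateAndAssign");
        FileSystem fs = archived.getFileSystem(conf);
        // Immediate children are the encoded region directories that survived the delete.
        for (FileStatus region : fs.listStatus(archived)) {
          System.out.println(region.getPath());
        }
      }
    }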
2023-07-21 15:15:44,708 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,708 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 2023-07-21 15:15:44,708 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952544708"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:44,711 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:44,711 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => afae71fd235935cd48fe2f30974b4199, NAME => 'Group_testCreateAndAssign,,1689952543575.afae71fd235935cd48fe2f30974b4199.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:44,711 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-21 15:15:44,712 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952544712"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:44,714 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-21 15:15:44,716 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:15:44,718 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 104 msec 2023-07-21 15:15:44,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 15:15:44,732 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-21 15:15:44,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:44,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
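The rsgroup traffic that starts here and continues below is the test harness resetting group membership between test methods: list the groups, move any stray tables and servers back to 'default', drop and re-create the 'master' group, then try to move the master's address into it. A rough sketch of those calls using the hbase-rsgroup client (API names as in the branch-2.4 module; treat exact signatures as approximate, and the ordering is simplified relative to the teardown in the log):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
            System.out.println(info.getName() + " -> " + info.getServers());
          }
          rsGroupAdmin.removeRSGroup("master");   // assumes the group exists from setup
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's own address is not a live region server, so this is expected
            // to fail with the ConstraintException seen in the log below.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 43019)),
                "master");
          } catch (IOException expected) {
            System.out.println("move rejected: " + expected.getMessage());
          }
        }
      }
    }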
2023-07-21 15:15:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:44,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:44,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:44,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:44,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:44,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:44,756 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:44,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:44,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:44,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:44,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:44,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:44,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:44,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 161 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953744772, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:44,773 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:44,775 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:44,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,777 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:44,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:44,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:44,800 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=493 (was 475) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1963094838_17 at /127.0.0.1:43498 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1664477655_17 at /127.0.0.1:59802 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1664477655_17 at /127.0.0.1:60654 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,41299,1689952542769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1664477655_17 at /127.0.0.1:43514 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:54780 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=767 (was 747) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=908 (was 908), ProcessCount=186 (was 189), AvailableMemoryMB=2213 (was 2297) 2023-07-21 15:15:44,818 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=493, OpenFileDescriptor=767, MaxFileDescriptor=60000, SystemLoadAverage=908, ProcessCount=186, AvailableMemoryMB=2213 2023-07-21 15:15:44,819 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-21 15:15:44,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:44,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:44,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:44,828 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:44,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:44,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:44,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:44,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:44,843 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:44,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:44,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:44,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:44,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:44,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:44,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:44,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 189 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953744857, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:44,858 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:15:44,860 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:44,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:44,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:44,862 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:44,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:44,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:44,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:44,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:44,870 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:44,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 24 2023-07-21 15:15:44,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:44,873 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:44,874 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:44,874 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:44,879 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:44,895 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:44,896 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 empty. 2023-07-21 15:15:44,896 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d empty. 2023-07-21 15:15:44,896 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd empty. 2023-07-21 15:15:44,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e empty. 2023-07-21 15:15:44,896 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 empty. 
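The ConstraintException in the stack trace above is thrown from RSGroupAdminServer.moveServers while TestRSGroupsBase.tearDownAfterMethod asks the master to move a server back to the default group and the master reports that server as offline or unknown. As a minimal client-side sketch of that call path (illustrative only, not the test's own code; the RSGroupAdminClient constructor and the Set<Address>/String moveServers signature are assumptions based on the class names visible in the trace):

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Client wrapper around the RSGroupAdminService coprocessor endpoint that
      // the trace shows running inside the master (RSGroupAdminEndpoint).
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // host:port taken from the log message purely for illustration; the master
      // rejects the request with a ConstraintException when the server is
      // offline or not known to it, which is the failure recorded above.
      Address server = Address.fromParts("jenkins-hbase17.apache.org", 43019);
      rsGroupAdmin.moveServers(Collections.singleton(server), "default");
    }
  }
}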
2023-07-21 15:15:44,896 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 empty. 2023-07-21 15:15:44,898 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:44,898 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:44,898 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:44,898 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:44,899 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:44,899 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:44,899 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:44,899 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:44,900 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 empty. 2023-07-21 15:15:44,900 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda empty. 2023-07-21 15:15:44,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb empty. 
2023-07-21 15:15:44,900 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:44,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:44,901 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 empty. 2023-07-21 15:15:44,901 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:44,901 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:44,902 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-21 15:15:44,953 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:44,958 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 80b437b66b0260165b4cc53d1ffc1dcd, NAME => 'Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:44,959 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9dfab0ef5bbbf401fff5d78540aa51fb, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:44,959 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => f223f2178366022812329faf0269386e, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => 
'\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:44,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:45,048 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,053 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 80b437b66b0260165b4cc53d1ffc1dcd, disabling compactions & flushes 2023-07-21 15:15:45,053 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,053 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,053 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. after waiting 0 ms 2023-07-21 15:15:45,053 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,054 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 
2023-07-21 15:15:45,054 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 80b437b66b0260165b4cc53d1ffc1dcd: 2023-07-21 15:15:45,054 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7a8815e28c428ab74e9401574fd3fc66, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing f223f2178366022812329faf0269386e, disabling compactions & flushes 2023-07-21 15:15:45,058 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. after waiting 0 ms 2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:45,058 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 
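The CreateTableProcedure entries here are the master-side half of a client creating a pre-split table: nine explicit split keys produce the ten Group_testCreateMultiRegion regions whose STARTKEY/ENDKEY pairs appear in the "creating {ENCODED => ...}" entries. A minimal sketch of the client call using the standard public Admin API (illustrative only, not the actual TestRSGroupsBasics code; the split-key array simply restates the boundaries logged above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'f' with default settings, matching the descriptor
      // printed by HMaster for the create request.
      TableDescriptorBuilder table = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCreateMultiRegion"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // Each split key becomes a region boundary; nine keys yield ten regions,
      // which matches the later "Added 10 regions to meta." entry.
      byte[][] splitKeys = new byte[][] {
          {0x00, 0x02, 0x04, 0x06, 0x08},
          {0x00, '"', '$', '&', '('},
          {0x00, 'B', 'D', 'F', 'H'},
          {0x00, 'b', 'd', 'f', 'h'},
          {0x00, (byte) 0x82, (byte) 0x84, (byte) 0x86, (byte) 0x88},
          {0x00, (byte) 0xA2, (byte) 0xA4, (byte) 0xA6, (byte) 0xA8},
          {0x00, (byte) 0xC2, (byte) 0xC4, (byte) 0xC6, (byte) 0xC8},
          {0x00, (byte) 0xE2, (byte) 0xE4, (byte) 0xE6, (byte) 0xE8},
          {0x01, 0x03, 0x05, 0x07, 0x09},
      };
      admin.createTable(table.build(), splitKeys);
    }
  }
}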
2023-07-21 15:15:45,058 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for f223f2178366022812329faf0269386e: 2023-07-21 15:15:45,059 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => b333d76f8a289fa3e9d3a85ccac2d993, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,095 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,096 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,096 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 7a8815e28c428ab74e9401574fd3fc66, disabling compactions & flushes 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing b333d76f8a289fa3e9d3a85ccac2d993, disabling compactions & flushes 2023-07-21 15:15:45,097 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,097 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. after waiting 0 ms 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 
after waiting 0 ms 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:45,097 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,097 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 7a8815e28c428ab74e9401574fd3fc66: 2023-07-21 15:15:45,097 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for b333d76f8a289fa3e9d3a85ccac2d993: 2023-07-21 15:15:45,098 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 00f6d4ac07f0fcd31f8192e380860bb6, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,098 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => f447ee9cb6fc700f8cfecb803531b992, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,124 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,124 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] 
regionserver.HRegion(1604): Closing 00f6d4ac07f0fcd31f8192e380860bb6, disabling compactions & flushes 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing f447ee9cb6fc700f8cfecb803531b992, disabling compactions & flushes 2023-07-21 15:15:45,126 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:45,126 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. after waiting 0 ms 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. after waiting 0 ms 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:45,126 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:45,126 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 
2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 00f6d4ac07f0fcd31f8192e380860bb6: 2023-07-21 15:15:45,126 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for f447ee9cb6fc700f8cfecb803531b992: 2023-07-21 15:15:45,127 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8fd6bb3a8252aaec41c83e0b948615a4, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,127 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1b53823572d856ffa6583cbccbf3885d, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 1b53823572d856ffa6583cbccbf3885d, disabling compactions & flushes 2023-07-21 15:15:45,147 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. after waiting 0 ms 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 
2023-07-21 15:15:45,147 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:45,147 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 1b53823572d856ffa6583cbccbf3885d: 2023-07-21 15:15:45,148 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => da36e046211cb4e43497513d34f32eda, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:45,152 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,155 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 8fd6bb3a8252aaec41c83e0b948615a4, disabling compactions & flushes 2023-07-21 15:15:45,155 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,155 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,155 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. after waiting 0 ms 2023-07-21 15:15:45,155 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,155 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 
2023-07-21 15:15:45,155 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 8fd6bb3a8252aaec41c83e0b948615a4: 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing da36e046211cb4e43497513d34f32eda, disabling compactions & flushes 2023-07-21 15:15:45,166 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. after waiting 0 ms 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,166 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,166 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for da36e046211cb4e43497513d34f32eda: 2023-07-21 15:15:45,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 9dfab0ef5bbbf401fff5d78540aa51fb, disabling compactions & flushes 2023-07-21 15:15:45,450 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 
after waiting 0 ms 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,450 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,450 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 9dfab0ef5bbbf401fff5d78540aa51fb: 2023-07-21 15:15:45,455 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,456 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,457 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 
15:15:45,457 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,457 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,457 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545456"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952545456"}]},"ts":"1689952545456"} 2023-07-21 15:15:45,461 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-21 15:15:45,463 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:45,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952545463"}]},"ts":"1689952545463"} 2023-07-21 15:15:45,466 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-21 15:15:45,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:45,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:45,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:45,470 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:45,470 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:15:45,470 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:45,471 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, ASSIGN}, {pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, ASSIGN}, {pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, ASSIGN}, {pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, ASSIGN}, {pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, ASSIGN}, {pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, ASSIGN}, {pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, ASSIGN}, {pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, ASSIGN}, {pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, ASSIGN}, {pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, ASSIGN}] 2023-07-21 15:15:45,477 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, ASSIGN 2023-07-21 15:15:45,478 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, ASSIGN 2023-07-21 15:15:45,479 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, ASSIGN 2023-07-21 15:15:45,480 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38527,1689952536414; forceNewPlan=false, retain=false 2023-07-21 15:15:45,483 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39253,1689952540479; forceNewPlan=false, retain=false 2023-07-21 15:15:45,483 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39253,1689952540479; forceNewPlan=false, retain=false 2023-07-21 15:15:45,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:45,485 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=b333d76f8a289fa3e9d3a85ccac2d993, ASSIGN 2023-07-21 15:15:45,485 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, ASSIGN 2023-07-21 15:15:45,486 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, ASSIGN 2023-07-21 15:15:45,486 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, ASSIGN 2023-07-21 15:15:45,487 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:45,487 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38527,1689952536414; forceNewPlan=false, retain=false 2023-07-21 15:15:45,487 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,39253,1689952540479; forceNewPlan=false, retain=false 2023-07-21 15:15:45,487 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41299,1689952542769; forceNewPlan=false, retain=false 2023-07-21 15:15:45,487 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, ASSIGN 2023-07-21 15:15:45,489 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, ASSIGN 2023-07-21 15:15:45,489 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38527,1689952536414; forceNewPlan=false, retain=false 2023-07-21 15:15:45,489 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, ASSIGN 2023-07-21 15:15:45,492 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41299,1689952542769; forceNewPlan=false, retain=false 2023-07-21 15:15:45,492 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:45,631 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-21 15:15:45,637 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1b53823572d856ffa6583cbccbf3885d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:45,637 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f447ee9cb6fc700f8cfecb803531b992, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:45,637 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=9dfab0ef5bbbf401fff5d78540aa51fb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:45,637 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=f223f2178366022812329faf0269386e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:45,638 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545637"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545637"}]},"ts":"1689952545637"} 2023-07-21 15:15:45,637 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=7a8815e28c428ab74e9401574fd3fc66, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:45,638 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545637"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545637"}]},"ts":"1689952545637"} 2023-07-21 15:15:45,638 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545637"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545637"}]},"ts":"1689952545637"} 2023-07-21 15:15:45,638 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545637"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545637"}]},"ts":"1689952545637"} 2023-07-21 15:15:45,638 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545637"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545637"}]},"ts":"1689952545637"} 2023-07-21 15:15:45,640 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=27, state=RUNNABLE; OpenRegionProcedure 9dfab0ef5bbbf401fff5d78540aa51fb, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:45,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=26, state=RUNNABLE; OpenRegionProcedure f223f2178366022812329faf0269386e, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:45,643 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure 7a8815e28c428ab74e9401574fd3fc66, server=jenkins-hbase17.apache.org,41299,1689952542769}] 2023-07-21 15:15:45,647 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=32, state=RUNNABLE; OpenRegionProcedure 1b53823572d856ffa6583cbccbf3885d, server=jenkins-hbase17.apache.org,41299,1689952542769}] 2023-07-21 15:15:45,648 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=da36e046211cb4e43497513d34f32eda, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:45,648 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545648"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545648"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545648"}]},"ts":"1689952545648"} 2023-07-21 15:15:45,649 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=00f6d4ac07f0fcd31f8192e380860bb6, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:45,652 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545649"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545649"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545649"}]},"ts":"1689952545649"} 2023-07-21 15:15:45,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=31, state=RUNNABLE; OpenRegionProcedure f447ee9cb6fc700f8cfecb803531b992, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:45,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=34, state=RUNNABLE; OpenRegionProcedure da36e046211cb4e43497513d34f32eda, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:45,653 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=80b437b66b0260165b4cc53d1ffc1dcd, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:45,654 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545653"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545653"}]},"ts":"1689952545653"} 2023-07-21 15:15:45,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure 00f6d4ac07f0fcd31f8192e380860bb6, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:45,655 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8fd6bb3a8252aaec41c83e0b948615a4, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:45,655 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545655"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545655"}]},"ts":"1689952545655"} 2023-07-21 15:15:45,657 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=25, state=RUNNABLE; OpenRegionProcedure 80b437b66b0260165b4cc53d1ffc1dcd, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:45,658 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=b333d76f8a289fa3e9d3a85ccac2d993, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:45,659 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545658"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952545658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952545658"}]},"ts":"1689952545658"} 2023-07-21 15:15:45,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=33, state=RUNNABLE; OpenRegionProcedure 
8fd6bb3a8252aaec41c83e0b948615a4, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:45,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=29, state=RUNNABLE; OpenRegionProcedure b333d76f8a289fa3e9d3a85ccac2d993, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:45,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,798 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:45,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9dfab0ef5bbbf401fff5d78540aa51fb, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-21 15:15:45,798 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:15:45,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,800 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59000, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:15:45,801 INFO [StoreOpener-9dfab0ef5bbbf401fff5d78540aa51fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,803 DEBUG [StoreOpener-9dfab0ef5bbbf401fff5d78540aa51fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/f 2023-07-21 15:15:45,803 DEBUG [StoreOpener-9dfab0ef5bbbf401fff5d78540aa51fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/f 2023-07-21 15:15:45,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 
2023-07-21 15:15:45,804 INFO [StoreOpener-9dfab0ef5bbbf401fff5d78540aa51fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9dfab0ef5bbbf401fff5d78540aa51fb columnFamilyName f 2023-07-21 15:15:45,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a8815e28c428ab74e9401574fd3fc66, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-21 15:15:45,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,805 INFO [StoreOpener-9dfab0ef5bbbf401fff5d78540aa51fb-1] regionserver.HStore(310): Store=9dfab0ef5bbbf401fff5d78540aa51fb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,808 INFO [StoreOpener-7a8815e28c428ab74e9401574fd3fc66-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,810 DEBUG [StoreOpener-7a8815e28c428ab74e9401574fd3fc66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/f 2023-07-21 15:15:45,810 DEBUG [StoreOpener-7a8815e28c428ab74e9401574fd3fc66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/f 2023-07-21 15:15:45,811 INFO [StoreOpener-7a8815e28c428ab74e9401574fd3fc66-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a8815e28c428ab74e9401574fd3fc66 columnFamilyName f 2023-07-21 15:15:45,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80b437b66b0260165b4cc53d1ffc1dcd, NAME => 'Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-21 15:15:45,811 INFO [StoreOpener-7a8815e28c428ab74e9401574fd3fc66-1] regionserver.HStore(310): Store=7a8815e28c428ab74e9401574fd3fc66/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:45,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,814 INFO [StoreOpener-80b437b66b0260165b4cc53d1ffc1dcd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,816 DEBUG [StoreOpener-80b437b66b0260165b4cc53d1ffc1dcd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/f 2023-07-21 15:15:45,816 DEBUG [StoreOpener-80b437b66b0260165b4cc53d1ffc1dcd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/f 2023-07-21 15:15:45,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8fd6bb3a8252aaec41c83e0b948615a4, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-21 15:15:45,817 INFO [StoreOpener-80b437b66b0260165b4cc53d1ffc1dcd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80b437b66b0260165b4cc53d1ffc1dcd columnFamilyName f 2023-07-21 15:15:45,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,818 INFO [StoreOpener-80b437b66b0260165b4cc53d1ffc1dcd-1] regionserver.HStore(310): 
Store=80b437b66b0260165b4cc53d1ffc1dcd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,819 INFO [StoreOpener-8fd6bb3a8252aaec41c83e0b948615a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:45,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,820 DEBUG [StoreOpener-8fd6bb3a8252aaec41c83e0b948615a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/f 2023-07-21 15:15:45,823 DEBUG [StoreOpener-8fd6bb3a8252aaec41c83e0b948615a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/f 2023-07-21 15:15:45,824 INFO [StoreOpener-8fd6bb3a8252aaec41c83e0b948615a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8fd6bb3a8252aaec41c83e0b948615a4 columnFamilyName f 2023-07-21 15:15:45,825 INFO [StoreOpener-8fd6bb3a8252aaec41c83e0b948615a4-1] regionserver.HStore(310): Store=8fd6bb3a8252aaec41c83e0b948615a4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:45,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,827 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,828 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7a8815e28c428ab74e9401574fd3fc66; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10800098080, jitterRate=0.0058375149965286255}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7a8815e28c428ab74e9401574fd3fc66: 2023-07-21 15:15:45,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 9dfab0ef5bbbf401fff5d78540aa51fb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11390450560, jitterRate=0.060818374156951904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 9dfab0ef5bbbf401fff5d78540aa51fb: 2023-07-21 15:15:45,830 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66., pid=37, masterSystemTime=1689952545798 2023-07-21 15:15:45,833 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb., pid=35, masterSystemTime=1689952545793 2023-07-21 15:15:45,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,835 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:45,836 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 
2023-07-21 15:15:45,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b53823572d856ffa6583cbccbf3885d, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-21 15:15:45,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,838 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=7a8815e28c428ab74e9401574fd3fc66, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:45,838 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545838"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545838"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545838"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545838"}]},"ts":"1689952545838"} 2023-07-21 15:15:45,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,841 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=9dfab0ef5bbbf401fff5d78540aa51fb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:45,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:45,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 
2023-07-21 15:15:45,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:45,842 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545841"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545841"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545841"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545841"}]},"ts":"1689952545841"} 2023-07-21 15:15:45,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f223f2178366022812329faf0269386e, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-21 15:15:45,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion f223f2178366022812329faf0269386e 2023-07-21 15:15:45,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for f223f2178366022812329faf0269386e 2023-07-21 15:15:45,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for f223f2178366022812329faf0269386e 2023-07-21 15:15:45,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-21 15:15:45,847 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure 7a8815e28c428ab74e9401574fd3fc66, server=jenkins-hbase17.apache.org,41299,1689952542769 in 198 msec 2023-07-21 15:15:45,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, ASSIGN in 377 msec 2023-07-21 15:15:45,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=27 2023-07-21 15:15:45,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=27, state=SUCCESS; OpenRegionProcedure 9dfab0ef5bbbf401fff5d78540aa51fb, server=jenkins-hbase17.apache.org,39253,1689952540479 in 206 msec 2023-07-21 15:15:45,854 INFO [StoreOpener-f223f2178366022812329faf0269386e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f223f2178366022812329faf0269386e 2023-07-21 15:15:45,855 INFO [StoreOpener-1b53823572d856ffa6583cbccbf3885d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 8fd6bb3a8252aaec41c83e0b948615a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11748760000, jitterRate=0.09418854117393494}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 8fd6bb3a8252aaec41c83e0b948615a4: 2023-07-21 15:15:45,857 DEBUG [StoreOpener-f223f2178366022812329faf0269386e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/f 2023-07-21 15:15:45,857 DEBUG [StoreOpener-f223f2178366022812329faf0269386e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/f 2023-07-21 15:15:45,858 DEBUG [StoreOpener-1b53823572d856ffa6583cbccbf3885d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/f 2023-07-21 15:15:45,858 DEBUG [StoreOpener-1b53823572d856ffa6583cbccbf3885d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/f 2023-07-21 15:15:45,859 INFO [StoreOpener-1b53823572d856ffa6583cbccbf3885d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b53823572d856ffa6583cbccbf3885d columnFamilyName f 2023-07-21 15:15:45,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4., pid=43, masterSystemTime=1689952545812 2023-07-21 15:15:45,859 INFO [StoreOpener-f223f2178366022812329faf0269386e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f223f2178366022812329faf0269386e columnFamilyName f 2023-07-21 15:15:45,860 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, ASSIGN in 381 msec 2023-07-21 15:15:45,860 INFO [StoreOpener-1b53823572d856ffa6583cbccbf3885d-1] regionserver.HStore(310): Store=1b53823572d856ffa6583cbccbf3885d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,861 INFO [StoreOpener-f223f2178366022812329faf0269386e-1] regionserver.HStore(310): Store=f223f2178366022812329faf0269386e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:45,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 
2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b333d76f8a289fa3e9d3a85ccac2d993, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,862 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8fd6bb3a8252aaec41c83e0b948615a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:45,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 80b437b66b0260165b4cc53d1ffc1dcd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10571200320, jitterRate=-0.015480250120162964}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 80b437b66b0260165b4cc53d1ffc1dcd: 2023-07-21 15:15:45,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,863 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545862"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545862"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545862"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545862"}]},"ts":"1689952545862"} 2023-07-21 15:15:45,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:45,865 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:45,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd., pid=42, masterSystemTime=1689952545807 2023-07-21 15:15:45,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:45,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for f223f2178366022812329faf0269386e 2023-07-21 15:15:45,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:45,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da36e046211cb4e43497513d34f32eda, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-21 15:15:45,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,877 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=80b437b66b0260165b4cc53d1ffc1dcd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:45,877 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545877"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545877"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545877"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545877"}]},"ts":"1689952545877"} 2023-07-21 15:15:45,879 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=33 2023-07-21 15:15:45,879 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=33, state=SUCCESS; OpenRegionProcedure 8fd6bb3a8252aaec41c83e0b948615a4, server=jenkins-hbase17.apache.org,36355,1689952536596 in 208 msec 2023-07-21 15:15:45,881 INFO [StoreOpener-da36e046211cb4e43497513d34f32eda-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, ASSIGN in 409 msec 2023-07-21 15:15:45,881 INFO [StoreOpener-b333d76f8a289fa3e9d3a85ccac2d993-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,882 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=25 2023-07-21 15:15:45,882 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=25, state=SUCCESS; OpenRegionProcedure 80b437b66b0260165b4cc53d1ffc1dcd, server=jenkins-hbase17.apache.org,38527,1689952536414 in 222 msec 2023-07-21 15:15:45,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, ASSIGN in 412 msec 2023-07-21 15:15:45,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened f223f2178366022812329faf0269386e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10931364960, jitterRate=0.018062695860862732}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for f223f2178366022812329faf0269386e: 2023-07-21 15:15:45,890 DEBUG [StoreOpener-b333d76f8a289fa3e9d3a85ccac2d993-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/f 2023-07-21 15:15:45,891 DEBUG [StoreOpener-da36e046211cb4e43497513d34f32eda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/f 2023-07-21 15:15:45,891 DEBUG [StoreOpener-b333d76f8a289fa3e9d3a85ccac2d993-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/f 2023-07-21 15:15:45,891 DEBUG [StoreOpener-da36e046211cb4e43497513d34f32eda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/f 2023-07-21 15:15:45,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,891 INFO [StoreOpener-da36e046211cb4e43497513d34f32eda-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da36e046211cb4e43497513d34f32eda columnFamilyName f 2023-07-21 15:15:45,893 INFO [StoreOpener-b333d76f8a289fa3e9d3a85ccac2d993-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b333d76f8a289fa3e9d3a85ccac2d993 columnFamilyName f 2023-07-21 15:15:45,894 INFO [StoreOpener-b333d76f8a289fa3e9d3a85ccac2d993-1] regionserver.HStore(310): Store=b333d76f8a289fa3e9d3a85ccac2d993/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1b53823572d856ffa6583cbccbf3885d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10231786880, jitterRate=-0.04709059000015259}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 
1b53823572d856ffa6583cbccbf3885d: 2023-07-21 15:15:45,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e., pid=36, masterSystemTime=1689952545793 2023-07-21 15:15:45,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d., pid=38, masterSystemTime=1689952545798 2023-07-21 15:15:45,898 INFO [StoreOpener-da36e046211cb4e43497513d34f32eda-1] regionserver.HStore(310): Store=da36e046211cb4e43497513d34f32eda/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:45,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:45,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 
2023-07-21 15:15:45,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f447ee9cb6fc700f8cfecb803531b992, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-21 15:15:45,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:45,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:45,903 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=f223f2178366022812329faf0269386e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:45,903 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545903"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545903"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545903"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545903"}]},"ts":"1689952545903"} 2023-07-21 15:15:45,904 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=1b53823572d856ffa6583cbccbf3885d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:45,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:45,905 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545904"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545904"}]},"ts":"1689952545904"} 2023-07-21 15:15:45,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1055): writing seq id for da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:45,910 INFO [StoreOpener-f447ee9cb6fc700f8cfecb803531b992-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,914 DEBUG [StoreOpener-f447ee9cb6fc700f8cfecb803531b992-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/f 2023-07-21 15:15:45,914 DEBUG [StoreOpener-f447ee9cb6fc700f8cfecb803531b992-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/f 2023-07-21 15:15:45,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=26 2023-07-21 15:15:45,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=26, state=SUCCESS; OpenRegionProcedure f223f2178366022812329faf0269386e, server=jenkins-hbase17.apache.org,39253,1689952540479 in 264 msec 2023-07-21 15:15:45,916 INFO [StoreOpener-f447ee9cb6fc700f8cfecb803531b992-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f447ee9cb6fc700f8cfecb803531b992 columnFamilyName f 2023-07-21 15:15:45,918 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=32 2023-07-21 15:15:45,918 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=32, state=SUCCESS; OpenRegionProcedure 1b53823572d856ffa6583cbccbf3885d, server=jenkins-hbase17.apache.org,41299,1689952542769 in 261 msec 2023-07-21 15:15:45,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, ASSIGN in 445 msec 2023-07-21 15:15:45,923 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, ASSIGN in 448 msec 2023-07-21 15:15:45,925 INFO [StoreOpener-f447ee9cb6fc700f8cfecb803531b992-1] regionserver.HStore(310): Store=f447ee9cb6fc700f8cfecb803531b992/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:45,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened b333d76f8a289fa3e9d3a85ccac2d993; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11314525120, jitterRate=0.0537472665309906}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for b333d76f8a289fa3e9d3a85ccac2d993: 2023-07-21 15:15:45,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993., pid=44, masterSystemTime=1689952545812 2023-07-21 15:15:45,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:45,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened da36e046211cb4e43497513d34f32eda; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11565538080, jitterRate=0.07712467014789581}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:45,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for da36e046211cb4e43497513d34f32eda: 2023-07-21 15:15:45,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda., pid=40, masterSystemTime=1689952545807 2023-07-21 15:15:45,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:45,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 
2023-07-21 15:15:45,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:45,962 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=b333d76f8a289fa3e9d3a85ccac2d993, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:45,963 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952545962"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545962"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545962"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545962"}]},"ts":"1689952545962"} 2023-07-21 15:15:45,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:45,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:45,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00f6d4ac07f0fcd31f8192e380860bb6, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-21 15:15:45,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:45,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:45,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:45,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:45,974 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=da36e046211cb4e43497513d34f32eda, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:45,974 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952545973"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952545973"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952545973"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952545973"}]},"ts":"1689952545973"} 2023-07-21 15:15:45,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=34 2023-07-21 15:15:45,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=34, state=SUCCESS; OpenRegionProcedure da36e046211cb4e43497513d34f32eda, server=jenkins-hbase17.apache.org,38527,1689952536414 in 324 msec 2023-07-21 15:15:45,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:45,987 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=29 2023-07-21 15:15:45,987 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=29, state=SUCCESS; OpenRegionProcedure b333d76f8a289fa3e9d3a85ccac2d993, server=jenkins-hbase17.apache.org,36355,1689952536596 in 312 msec 2023-07-21 15:15:45,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, ASSIGN in 517 msec 2023-07-21 15:15:45,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, ASSIGN in 517 msec 2023-07-21 15:15:46,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:46,003 INFO [StoreOpener-00f6d4ac07f0fcd31f8192e380860bb6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:46,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened f447ee9cb6fc700f8cfecb803531b992; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11694992960, jitterRate=0.0891810953617096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:46,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for f447ee9cb6fc700f8cfecb803531b992: 2023-07-21 15:15:46,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992., pid=39, masterSystemTime=1689952545793 2023-07-21 15:15:46,006 DEBUG [StoreOpener-00f6d4ac07f0fcd31f8192e380860bb6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/f 2023-07-21 15:15:46,006 DEBUG [StoreOpener-00f6d4ac07f0fcd31f8192e380860bb6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/f 2023-07-21 15:15:46,006 INFO [StoreOpener-00f6d4ac07f0fcd31f8192e380860bb6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 00f6d4ac07f0fcd31f8192e380860bb6 columnFamilyName f 2023-07-21 15:15:46,007 INFO [StoreOpener-00f6d4ac07f0fcd31f8192e380860bb6-1] regionserver.HStore(310): Store=00f6d4ac07f0fcd31f8192e380860bb6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:46,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:46,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 
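
The open-region activity above is all fallout from one pre-split create: CreateTableProcedure pid=24 produced one region per split boundary, and each open is then reported back to the master (Post open deploy tasks, RegionStateStore puts, OpenRegionProcedure/ASSIGN completions). For reference, a client-side call of roughly this shape produces such a layout. This is an illustrative sketch against the standard HBase 2.x Admin API, not the test's own code; the class name, the trimmed split-key list, and the connection handling are assumptions, while the table name, the column family "f", and the boundary byte patterns are taken from the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName name = TableName.valueOf("Group_testCreateMultiRegion");
          // Single column family "f", as in the HStore/StoreOpener lines above.
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Explicit split keys: each boundary becomes a region STARTKEY. Only a few
          // boundaries are shown here; the log's regions continue the same pattern
          // (\x00BDFH, \x00bdfh, ... up to \x01\x03\x05\x07\x09).
          byte[][] splitKeys = new byte[][] {
              new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
              new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},
              new byte[] {0x00, (byte) 0x82, (byte) 0x84, (byte) 0x86, (byte) 0x88},
          };
          // createTable blocks until the master's CreateTableProcedure finishes,
          // which the client later reports as "Operation: CREATE ... completed".
          admin.createTable(desc, splitKeys);
        }
      }
    }

With N split keys the table comes up as N+1 regions, which lines up with the ten UNASSIGN subprocedures (pids 46-55) scheduled when the table is disabled further down.
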
2023-07-21 15:15:46,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:46,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:46,010 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=f447ee9cb6fc700f8cfecb803531b992, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:46,010 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952546010"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952546010"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952546010"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952546010"}]},"ts":"1689952546010"} 2023-07-21 15:15:46,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:46,016 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=31 2023-07-21 15:15:46,016 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=31, state=SUCCESS; OpenRegionProcedure f447ee9cb6fc700f8cfecb803531b992, server=jenkins-hbase17.apache.org,39253,1689952540479 in 361 msec 2023-07-21 15:15:46,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:46,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 00f6d4ac07f0fcd31f8192e380860bb6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10765694400, jitterRate=0.002633422613143921}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:46,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 00f6d4ac07f0fcd31f8192e380860bb6: 2023-07-21 15:15:46,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, ASSIGN in 546 msec 2023-07-21 15:15:46,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6., pid=41, masterSystemTime=1689952545807 2023-07-21 15:15:46,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:46,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:46,030 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=00f6d4ac07f0fcd31f8192e380860bb6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:46,030 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952546030"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952546030"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952546030"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952546030"}]},"ts":"1689952546030"} 2023-07-21 15:15:46,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-21 15:15:46,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure 00f6d4ac07f0fcd31f8192e380860bb6, server=jenkins-hbase17.apache.org,38527,1689952536414 in 379 msec 2023-07-21 15:15:46,043 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=24 2023-07-21 15:15:46,043 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, ASSIGN in 570 msec 2023-07-21 15:15:46,044 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:46,045 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952546045"}]},"ts":"1689952546045"} 2023-07-21 15:15:46,047 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-21 15:15:46,050 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:46,053 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.1840 sec 2023-07-21 15:15:46,964 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCreateMultiRegion' 2023-07-21 15:15:46,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 15:15:46,992 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 24 completed 2023-07-21 15:15:46,993 DEBUG [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3430): Waiting 
until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-21 15:15:46,994 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:47,034 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 2023-07-21 15:15:47,035 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:47,036 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 2023-07-21 15:15:47,040 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-21 15:15:47,040 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateMultiRegion 2023-07-21 15:15:47,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=45, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,052 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952547052"}]},"ts":"1689952547052"} 2023-07-21 15:15:47,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-21 15:15:47,062 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-21 15:15:47,063 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-21 15:15:47,072 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, UNASSIGN}, {pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, UNASSIGN}, {pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, UNASSIGN}, {pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, UNASSIGN}, {pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, UNASSIGN}, {pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, UNASSIGN}, {pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, UNASSIGN}, {pid=53, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, UNASSIGN}, {pid=54, ppid=45, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, UNASSIGN}, {pid=55, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, UNASSIGN}] 2023-07-21 15:15:47,081 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, UNASSIGN 2023-07-21 15:15:47,089 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=80b437b66b0260165b4cc53d1ffc1dcd, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:47,089 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952547088"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547088"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547088"}]},"ts":"1689952547088"} 2023-07-21 15:15:47,090 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, UNASSIGN 2023-07-21 15:15:47,090 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, UNASSIGN 2023-07-21 15:15:47,091 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, UNASSIGN 2023-07-21 15:15:47,091 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, UNASSIGN 2023-07-21 15:15:47,096 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=7a8815e28c428ab74e9401574fd3fc66, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:47,096 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547095"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547095"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547095"}]},"ts":"1689952547095"} 2023-07-21 15:15:47,096 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=b333d76f8a289fa3e9d3a85ccac2d993, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:47,096 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547096"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547096"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547096"}]},"ts":"1689952547096"} 2023-07-21 15:15:47,097 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=9dfab0ef5bbbf401fff5d78540aa51fb, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:47,097 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547097"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547097"}]},"ts":"1689952547097"} 2023-07-21 15:15:47,096 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f223f2178366022812329faf0269386e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:47,100 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547095"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547095"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547095"}]},"ts":"1689952547095"} 2023-07-21 15:15:47,101 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE; CloseRegionProcedure 80b437b66b0260165b4cc53d1ffc1dcd, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:47,105 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=49, state=RUNNABLE; CloseRegionProcedure b333d76f8a289fa3e9d3a85ccac2d993, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:47,105 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=47, state=RUNNABLE; CloseRegionProcedure 9dfab0ef5bbbf401fff5d78540aa51fb, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:47,113 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=48, state=RUNNABLE; CloseRegionProcedure 7a8815e28c428ab74e9401574fd3fc66, server=jenkins-hbase17.apache.org,41299,1689952542769}] 2023-07-21 15:15:47,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=46, state=RUNNABLE; CloseRegionProcedure f223f2178366022812329faf0269386e, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:47,125 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, UNASSIGN 2023-07-21 15:15:47,128 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, UNASSIGN 2023-07-21 15:15:47,129 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=da36e046211cb4e43497513d34f32eda, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:47,129 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952547129"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547129"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547129"}]},"ts":"1689952547129"} 2023-07-21 15:15:47,130 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, UNASSIGN 2023-07-21 15:15:47,148 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=8fd6bb3a8252aaec41c83e0b948615a4, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:47,148 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547148"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547148"}]},"ts":"1689952547148"} 2023-07-21 15:15:47,149 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, UNASSIGN 2023-07-21 15:15:47,149 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=1b53823572d856ffa6583cbccbf3885d, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:47,156 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547149"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547149"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547149"}]},"ts":"1689952547149"} 2023-07-21 15:15:47,158 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, UNASSIGN 2023-07-21 15:15:47,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-21 15:15:47,169 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=54, state=RUNNABLE; CloseRegionProcedure da36e046211cb4e43497513d34f32eda, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:47,175 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=f447ee9cb6fc700f8cfecb803531b992, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:47,175 DEBUG [PEWorker-1] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547175"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547175"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547175"}]},"ts":"1689952547175"} 2023-07-21 15:15:47,177 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=53, state=RUNNABLE; CloseRegionProcedure 8fd6bb3a8252aaec41c83e0b948615a4, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:47,181 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=00f6d4ac07f0fcd31f8192e380860bb6, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:47,182 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547181"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952547181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952547181"}]},"ts":"1689952547181"} 2023-07-21 15:15:47,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=52, state=RUNNABLE; CloseRegionProcedure 1b53823572d856ffa6583cbccbf3885d, server=jenkins-hbase17.apache.org,41299,1689952542769}] 2023-07-21 15:15:47,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=51, state=RUNNABLE; CloseRegionProcedure f447ee9cb6fc700f8cfecb803531b992, server=jenkins-hbase17.apache.org,39253,1689952540479}] 2023-07-21 15:15:47,229 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=50, state=RUNNABLE; CloseRegionProcedure 00f6d4ac07f0fcd31f8192e380860bb6, server=jenkins-hbase17.apache.org,38527,1689952536414}] 2023-07-21 15:15:47,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:47,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing b333d76f8a289fa3e9d3a85ccac2d993, disabling compactions & flushes 2023-07-21 15:15:47,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:47,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:47,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:47,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. after waiting 0 ms 2023-07-21 15:15:47,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 
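
From 15:15:47,040 onward the log is the teardown half: the client asks the master to disable Group_testCreateMultiRegion, DisableTableProcedure pid=45 moves the table to DISABLING, schedules one UNASSIGN TransitRegionStateProcedure plus a CloseRegionProcedure per region, and the region servers begin taking close locks. The client-side trigger is a single Admin call; the sketch below uses the standard HBase 2.x Admin API, with the class and method names being illustrative assumptions and the Admin instance assumed to come from an open Connection as in the earlier sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class DisableMultiRegionTableSketch {
      // Assumes an Admin obtained from an already-open Connection.
      public static void disableAndCheck(Admin admin) throws IOException {
        TableName name = TableName.valueOf("Group_testCreateMultiRegion");
        // disableTable blocks while the master drives DisableTableProcedure; the
        // recurring "Checking to see if procedure is done pid=45" entries are the
        // client polling for that procedure's completion.
        admin.disableTable(name);
        // Once the procedure finishes, hbase:meta records the table state as DISABLED.
        if (!admin.isTableDisabled(name)) {
          throw new IllegalStateException("expected Group_testCreateMultiRegion to be disabled");
        }
      }
    }
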
2023-07-21 15:15:47,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:47,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:47,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing f447ee9cb6fc700f8cfecb803531b992, disabling compactions & flushes 2023-07-21 15:15:47,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:47,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:47,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. after waiting 0 ms 2023-07-21 15:15:47,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 80b437b66b0260165b4cc53d1ffc1dcd, disabling compactions & flushes 2023-07-21 15:15:47,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7a8815e28c428ab74e9401574fd3fc66, disabling compactions & flushes 2023-07-21 15:15:47,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. after waiting 0 ms 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:47,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:47,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 
after waiting 0 ms 2023-07-21 15:15:47,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:47,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993. 2023-07-21 15:15:47,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for b333d76f8a289fa3e9d3a85ccac2d993: 2023-07-21 15:15:47,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:47,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:47,346 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=b333d76f8a289fa3e9d3a85ccac2d993, regionState=CLOSED 2023-07-21 15:15:47,346 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547346"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547346"}]},"ts":"1689952547346"} 2023-07-21 15:15:47,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 8fd6bb3a8252aaec41c83e0b948615a4, disabling compactions & flushes 2023-07-21 15:15:47,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:47,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:47,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. after waiting 0 ms 2023-07-21 15:15:47,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 
2023-07-21 15:15:47,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-21 15:15:47,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992. 2023-07-21 15:15:47,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for f447ee9cb6fc700f8cfecb803531b992: 2023-07-21 15:15:47,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66. 2023-07-21 15:15:47,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7a8815e28c428ab74e9401574fd3fc66: 2023-07-21 15:15:47,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:47,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close f223f2178366022812329faf0269386e 2023-07-21 15:15:47,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing f223f2178366022812329faf0269386e, disabling compactions & flushes 2023-07-21 15:15:47,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:47,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:47,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. after waiting 0 ms 2023-07-21 15:15:47,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 
2023-07-21 15:15:47,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=49 2023-07-21 15:15:47,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=49, state=SUCCESS; CloseRegionProcedure b333d76f8a289fa3e9d3a85ccac2d993, server=jenkins-hbase17.apache.org,36355,1689952536596 in 247 msec 2023-07-21 15:15:47,408 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=f447ee9cb6fc700f8cfecb803531b992, regionState=CLOSED 2023-07-21 15:15:47,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,408 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547407"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547407"}]},"ts":"1689952547407"} 2023-07-21 15:15:47,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4. 2023-07-21 15:15:47,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 8fd6bb3a8252aaec41c83e0b948615a4: 2023-07-21 15:15:47,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=7a8815e28c428ab74e9401574fd3fc66, regionState=CLOSED 2023-07-21 15:15:47,411 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b333d76f8a289fa3e9d3a85ccac2d993, UNASSIGN in 338 msec 2023-07-21 15:15:47,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:47,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:47,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1b53823572d856ffa6583cbccbf3885d, disabling compactions & flushes 2023-07-21 15:15:47,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:47,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 
2023-07-21 15:15:47,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. after waiting 0 ms 2023-07-21 15:15:47,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd. 2023-07-21 15:15:47,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 80b437b66b0260165b4cc53d1ffc1dcd: 2023-07-21 15:15:47,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:47,413 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547410"}]},"ts":"1689952547410"} 2023-07-21 15:15:47,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:47,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:47,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:47,424 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=8fd6bb3a8252aaec41c83e0b948615a4, regionState=CLOSED 2023-07-21 15:15:47,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e. 2023-07-21 15:15:47,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for f223f2178366022812329faf0269386e: 2023-07-21 15:15:47,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing da36e046211cb4e43497513d34f32eda, disabling compactions & flushes 2023-07-21 15:15:47,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:47,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 
2023-07-21 15:15:47,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. after waiting 0 ms 2023-07-21 15:15:47,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d. 2023-07-21 15:15:47,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1b53823572d856ffa6583cbccbf3885d: 2023-07-21 15:15:47,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 2023-07-21 15:15:47,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=51 2023-07-21 15:15:47,434 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=80b437b66b0260165b4cc53d1ffc1dcd, regionState=CLOSED 2023-07-21 15:15:47,434 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=51, state=SUCCESS; CloseRegionProcedure f447ee9cb6fc700f8cfecb803531b992, server=jenkins-hbase17.apache.org,39253,1689952540479 in 213 msec 2023-07-21 15:15:47,434 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952547434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547434"}]},"ts":"1689952547434"} 2023-07-21 15:15:47,440 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547424"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547424"}]},"ts":"1689952547424"} 2023-07-21 15:15:47,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed f223f2178366022812329faf0269386e 2023-07-21 15:15:47,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:47,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 9dfab0ef5bbbf401fff5d78540aa51fb, disabling compactions & flushes 2023-07-21 15:15:47,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:47,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:47,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 
after waiting 0 ms 2023-07-21 15:15:47,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:47,455 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=48 2023-07-21 15:15:47,456 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=48, state=SUCCESS; CloseRegionProcedure 7a8815e28c428ab74e9401574fd3fc66, server=jenkins-hbase17.apache.org,41299,1689952542769 in 308 msec 2023-07-21 15:15:47,456 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f447ee9cb6fc700f8cfecb803531b992, UNASSIGN in 362 msec 2023-07-21 15:15:47,457 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f223f2178366022812329faf0269386e, regionState=CLOSED 2023-07-21 15:15:47,457 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547457"}]},"ts":"1689952547457"} 2023-07-21 15:15:47,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:47,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb. 2023-07-21 15:15:47,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 9dfab0ef5bbbf401fff5d78540aa51fb: 2023-07-21 15:15:47,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda. 
2023-07-21 15:15:47,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for da36e046211cb4e43497513d34f32eda: 2023-07-21 15:15:47,470 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=1b53823572d856ffa6583cbccbf3885d, regionState=CLOSED 2023-07-21 15:15:47,470 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547470"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547470"}]},"ts":"1689952547470"} 2023-07-21 15:15:47,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=7a8815e28c428ab74e9401574fd3fc66, UNASSIGN in 389 msec 2023-07-21 15:15:47,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:47,481 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=9dfab0ef5bbbf401fff5d78540aa51fb, regionState=CLOSED 2023-07-21 15:15:47,481 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547480"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547480"}]},"ts":"1689952547480"} 2023-07-21 15:15:47,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=55 2023-07-21 15:15:47,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; CloseRegionProcedure 80b437b66b0260165b4cc53d1ffc1dcd, server=jenkins-hbase17.apache.org,38527,1689952536414 in 356 msec 2023-07-21 15:15:47,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:47,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:47,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 00f6d4ac07f0fcd31f8192e380860bb6, disabling compactions & flushes 2023-07-21 15:15:47,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:47,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:47,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. after waiting 0 ms 2023-07-21 15:15:47,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 
2023-07-21 15:15:47,499 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=53 2023-07-21 15:15:47,500 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=53, state=SUCCESS; CloseRegionProcedure 8fd6bb3a8252aaec41c83e0b948615a4, server=jenkins-hbase17.apache.org,36355,1689952536596 in 292 msec 2023-07-21 15:15:47,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=46 2023-07-21 15:15:47,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=46, state=SUCCESS; CloseRegionProcedure f223f2178366022812329faf0269386e, server=jenkins-hbase17.apache.org,39253,1689952540479 in 354 msec 2023-07-21 15:15:47,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:47,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6. 2023-07-21 15:15:47,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 00f6d4ac07f0fcd31f8192e380860bb6: 2023-07-21 15:15:47,513 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=80b437b66b0260165b4cc53d1ffc1dcd, UNASSIGN in 409 msec 2023-07-21 15:15:47,514 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8fd6bb3a8252aaec41c83e0b948615a4, UNASSIGN in 428 msec 2023-07-21 15:15:47,515 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f223f2178366022812329faf0269386e, UNASSIGN in 433 msec 2023-07-21 15:15:47,519 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=da36e046211cb4e43497513d34f32eda, regionState=CLOSED 2023-07-21 15:15:47,520 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689952547519"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547519"}]},"ts":"1689952547519"} 2023-07-21 15:15:47,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=52 2023-07-21 15:15:47,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=52, state=SUCCESS; CloseRegionProcedure 1b53823572d856ffa6583cbccbf3885d, server=jenkins-hbase17.apache.org,41299,1689952542769 in 301 msec 2023-07-21 15:15:47,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=47 2023-07-21 15:15:47,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=47, state=SUCCESS; CloseRegionProcedure 9dfab0ef5bbbf401fff5d78540aa51fb, server=jenkins-hbase17.apache.org,39253,1689952540479 in 396 msec 2023-07-21 15:15:47,526 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:47,527 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=00f6d4ac07f0fcd31f8192e380860bb6, regionState=CLOSED 2023-07-21 15:15:47,527 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952547526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952547526"}]},"ts":"1689952547526"} 2023-07-21 15:15:47,542 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9dfab0ef5bbbf401fff5d78540aa51fb, UNASSIGN in 455 msec 2023-07-21 15:15:47,551 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=1b53823572d856ffa6583cbccbf3885d, UNASSIGN in 452 msec 2023-07-21 15:15:47,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=54 2023-07-21 15:15:47,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=54, state=SUCCESS; CloseRegionProcedure da36e046211cb4e43497513d34f32eda, server=jenkins-hbase17.apache.org,38527,1689952536414 in 354 msec 2023-07-21 15:15:47,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=50 2023-07-21 15:15:47,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=50, state=SUCCESS; CloseRegionProcedure 00f6d4ac07f0fcd31f8192e380860bb6, server=jenkins-hbase17.apache.org,38527,1689952536414 in 305 msec 2023-07-21 15:15:47,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=da36e046211cb4e43497513d34f32eda, UNASSIGN in 484 msec 2023-07-21 15:15:47,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-21 15:15:47,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=00f6d4ac07f0fcd31f8192e380860bb6, UNASSIGN in 514 msec 2023-07-21 15:15:47,611 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952547611"}]},"ts":"1689952547611"} 2023-07-21 15:15:47,617 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-21 15:15:47,621 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-21 15:15:47,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 584 msec 2023-07-21 15:15:47,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-21 15:15:47,694 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testCreateMultiRegion, procId: 45 completed 2023-07-21 15:15:47,695 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateMultiRegion 2023-07-21 15:15:47,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,701 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-21 15:15:47,702 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=66, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:47,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:47,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:47,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:47,729 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:47,745 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/recovered.edits] 2023-07-21 15:15:47,749 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/recovered.edits] 2023-07-21 15:15:47,749 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/recovered.edits] 2023-07-21 15:15:47,750 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/recovered.edits] 2023-07-21 15:15:47,751 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/recovered.edits] 2023-07-21 15:15:47,751 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/recovered.edits] 2023-07-21 15:15:47,751 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/recovered.edits] 2023-07-21 15:15:47,752 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/recovered.edits] 2023-07-21 15:15:47,795 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4/recovered.edits/4.seqid 2023-07-21 15:15:47,797 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/8fd6bb3a8252aaec41c83e0b948615a4 2023-07-21 15:15:47,797 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:47,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 15:15:47,817 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d/recovered.edits/4.seqid 2023-07-21 15:15:47,818 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66/recovered.edits/4.seqid 2023-07-21 15:15:47,822 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993/recovered.edits/4.seqid 2023-07-21 15:15:47,822 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6/recovered.edits/4.seqid 2023-07-21 15:15:47,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/7a8815e28c428ab74e9401574fd3fc66 2023-07-21 15:15:47,823 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:47,823 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/1b53823572d856ffa6583cbccbf3885d 2023-07-21 15:15:47,826 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/recovered.edits] 2023-07-21 15:15:47,827 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb/recovered.edits/4.seqid 2023-07-21 15:15:47,827 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992/recovered.edits/4.seqid 2023-07-21 15:15:47,827 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/b333d76f8a289fa3e9d3a85ccac2d993 2023-07-21 15:15:47,828 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/00f6d4ac07f0fcd31f8192e380860bb6 2023-07-21 15:15:47,829 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/9dfab0ef5bbbf401fff5d78540aa51fb 2023-07-21 15:15:47,831 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/recovered.edits] 2023-07-21 15:15:47,831 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f447ee9cb6fc700f8cfecb803531b992 2023-07-21 15:15:47,841 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e/recovered.edits/4.seqid 2023-07-21 15:15:47,843 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/f223f2178366022812329faf0269386e 2023-07-21 15:15:47,847 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda/recovered.edits/4.seqid 2023-07-21 15:15:47,849 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/da36e046211cb4e43497513d34f32eda 2023-07-21 15:15:47,849 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd/recovered.edits/4.seqid 2023-07-21 15:15:47,850 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateMultiRegion/80b437b66b0260165b4cc53d1ffc1dcd 2023-07-21 15:15:47,850 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-21 15:15:47,861 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=66, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; 
DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,870 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-21 15:15:47,878 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-21 15:15:47,881 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=66, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689952544865.f223f2178366022812329faf0269386e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,882 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,883 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,883 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete 
{"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689952544865.da36e046211cb4e43497513d34f32eda.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,883 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952547882"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,895 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-21 15:15:47,895 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f223f2178366022812329faf0269386e, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689952544865.f223f2178366022812329faf0269386e.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, {ENCODED => 9dfab0ef5bbbf401fff5d78540aa51fb, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689952544865.9dfab0ef5bbbf401fff5d78540aa51fb.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => 7a8815e28c428ab74e9401574fd3fc66, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689952544865.7a8815e28c428ab74e9401574fd3fc66.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => b333d76f8a289fa3e9d3a85ccac2d993, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689952544865.b333d76f8a289fa3e9d3a85ccac2d993.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => 00f6d4ac07f0fcd31f8192e380860bb6, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689952544865.00f6d4ac07f0fcd31f8192e380860bb6.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => f447ee9cb6fc700f8cfecb803531b992, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689952544865.f447ee9cb6fc700f8cfecb803531b992.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => 1b53823572d856ffa6583cbccbf3885d, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689952544865.1b53823572d856ffa6583cbccbf3885d.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 8fd6bb3a8252aaec41c83e0b948615a4, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689952544865.8fd6bb3a8252aaec41c83e0b948615a4.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => da36e046211cb4e43497513d34f32eda, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689952544865.da36e046211cb4e43497513d34f32eda.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => 80b437b66b0260165b4cc53d1ffc1dcd, NAME => 'Group_testCreateMultiRegion,,1689952544865.80b437b66b0260165b4cc53d1ffc1dcd.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-21 15:15:47,895 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
2023-07-21 15:15:47,895 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952547895"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:47,913 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-21 15:15:47,920 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=66, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:15:47,924 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 225 msec 2023-07-21 15:15:48,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 15:15:48,026 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 66 completed 2023-07-21 15:15:48,050 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,050 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:48,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
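The DISABLE (procId 45) and DELETE (procId 66) records above are driven by ordinary HBase Admin client calls. Below is a minimal Java sketch of that client side, assuming the standard HBase 2.x client API; the class name, the standalone main method, and the bare HBaseConfiguration setup are illustrative and not taken from the test code.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws IOException {
    // Illustrative setup: in the test this configuration comes from the minicluster.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
      if (admin.tableExists(tn)) {
        if (admin.isTableEnabled(tn)) {
          // Submits a DisableTableProcedure on the master (the pid=45 records above).
          admin.disableTable(tn);
        }
        // Submits a DeleteTableProcedure, which archives the region directories and
        // removes the region rows from hbase:meta (the pid=66 records above).
        admin.deleteTable(tn);
      }
    }
  }
}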
2023-07-21 15:15:48,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:48,055 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:48,055 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:48,063 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:48,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:48,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:48,100 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:48,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:48,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:48,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:48,113 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:48,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:48,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:48,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 250 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953748135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:48,137 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:48,139 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:48,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,140 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:48,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:48,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:48,173 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=501 (was 493) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 
for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1664477655_17 at /127.0.0.1:43498 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dd441b-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:59924 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa7c10e1-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=794 (was 767) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=843 (was 908), ProcessCount=186 (was 186), AvailableMemoryMB=2011 (was 2213) 2023-07-21 15:15:48,174 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 15:15:48,199 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=501, OpenFileDescriptor=794, MaxFileDescriptor=60000, SystemLoadAverage=843, ProcessCount=186, AvailableMemoryMB=2011 2023-07-21 15:15:48,199 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-21 15:15:48,199 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-21 15:15:48,205 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,205 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:48,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
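For orientation, the ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup requests logged around this point are the per-method cleanup in TestRSGroupsBase pushing servers and tables back into the default group before the next test. The sketch below shows roughly the same calls issued from a client, assuming the branch-2.4 hbase-rsgroup client API (RSGroupAdminClient is the class visible in the stack traces elsewhere in this log); the host/port and table name are copied from nearby log entries, and this is an illustration, not the test's actual code.

    // A minimal sketch, assuming the branch-2.4 hbase-rsgroup client API; not part of this log.
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Counterparts of the RPCs logged here: move tables and servers back to "default" ...
          rsGroupAdmin.moveTables(Collections.singleton(
              TableName.valueOf("Group_foo:Group_testCreateAndAssign")), "default");
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 36355)),
              "default");
          // ... then drop and re-create the auxiliary group.
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");
          rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));
        }
      }
    }
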
2023-07-21 15:15:48,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:48,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:48,210 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:48,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:48,217 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:48,221 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:48,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:48,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:48,227 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:48,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:48,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:48,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 278 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953748233, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:48,234 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:48,235 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:48,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,236 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:48,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:48,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:48,237 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-21 15:15:48,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:48,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:48,239 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup appInfo 2023-07-21 15:15:48,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:48,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:48,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:48,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:48,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:48,249 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:48,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36355] to rsgroup appInfo 2023-07-21 15:15:48,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:48,256 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(238): Moving server region 603dc738ccec189e3bde34ff84c46389, which do not belong to RSGroup appInfo 2023-07-21 15:15:48,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:15:48,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 15:15:48,258 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:15:48,259 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:48,259 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952548259"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952548259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952548259"}]},"ts":"1689952548259"} 2023-07-21 
15:15:48,261 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=67, state=RUNNABLE; CloseRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:48,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:15:48,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. after waiting 0 ms 2023-07-21 15:15:48,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 603dc738ccec189e3bde34ff84c46389 1/1 column families, dataSize=7.06 KB heapSize=11.56 KB 2023-07-21 15:15:48,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.06 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/6ca2192a296d47859e18b9a84011d90b as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b, entries=10, sequenceid=31, filesize=5.4 K 2023-07-21 15:15:48,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~7.06 KB/7225, heapSize ~11.55 KB/11824, currentSize=0 B/0 for 603dc738ccec189e3bde34ff84c46389 in 71ms, sequenceid=31, compaction requested=false 2023-07-21 15:15:48,503 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 15:15:48,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:15:48,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:15:48,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 603dc738ccec189e3bde34ff84c46389 move to jenkins-hbase17.apache.org,41299,1689952542769 record at close sequenceid=31 2023-07-21 15:15:48,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,510 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=CLOSED 2023-07-21 15:15:48,510 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952548510"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952548510"}]},"ts":"1689952548510"} 2023-07-21 15:15:48,517 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=67 2023-07-21 15:15:48,517 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=67, state=SUCCESS; CloseRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,36355,1689952536596 in 251 msec 2023-07-21 15:15:48,518 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,41299,1689952542769; forceNewPlan=false, retain=false 2023-07-21 15:15:48,669 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
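The entries above show the server move playing out at the region level: the hbase:rsgroup region is closed on jenkins-hbase17.apache.org,36355 (with a memstore flush to an HFile and a recovered.edits seqid marker), then reassigned and reopened on ...,41299. In this log the move is driven by the rsgroup admin endpoint on the master, but a comparable relocation can be driven from a client with the standard HBase 2.x Admin API; the sketch below is a hedged illustration only, with the destination server name taken from the log.

    // A hedged sketch using the standard HBase 2.x Admin API; not what the test itself runs.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class RegionMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName rsgroupTable = TableName.valueOf("hbase:rsgroup");
          // Flush the memstore first, mirroring the flush the close path performed above.
          admin.flush(rsgroupTable);
          // Move each region of the table to an explicit destination server.
          ServerName dest = ServerName.valueOf("jenkins-hbase17.apache.org", 41299, 1689952542769L);
          for (RegionInfo region : admin.getRegions(rsgroupTable)) {
            admin.move(region.getEncodedNameAsBytes(), dest);
          }
        }
      }
    }
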
2023-07-21 15:15:48,670 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:48,670 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952548669"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952548669"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952548669"}]},"ts":"1689952548669"} 2023-07-21 15:15:48,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=67, state=RUNNABLE; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,41299,1689952542769}] 2023-07-21 15:15:48,836 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:48,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:15:48,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. service=MultiRowMutationService 2023-07-21 15:15:48,838 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
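The "Loaded coprocessor ... from HTD of hbase:rsgroup" entry just above records the region server picking up MultiRowMutationEndpoint from the table descriptor when it opens the region. For reference, the sketch below shows how a coprocessor is declared on a descriptor with the 2.x TableDescriptorBuilder API; the table name used here is hypothetical and the snippet is illustrative only.

    // Hedged sketch: declaring a coprocessor in a table descriptor (HBase 2.x client API).
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorDescriptorSketch {
      public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))        // hypothetical table name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            // Region servers load this class when they open the table's regions,
            // which is what the "Loaded coprocessor ... from HTD" entry records.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
        System.out.println(td);
      }
    }
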
2023-07-21 15:15:48,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:48,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,842 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,844 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:15:48,844 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:15:48,845 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 603dc738ccec189e3bde34ff84c46389 columnFamilyName m 2023-07-21 15:15:48,858 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,858 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:15:48,858 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(310): Store=603dc738ccec189e3bde34ff84c46389/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:48,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:48,864 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 603dc738ccec189e3bde34ff84c46389; next sequenceid=35; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6ca6befb, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:48,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:15:48,865 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., pid=69, masterSystemTime=1689952548831 2023-07-21 15:15:48,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,867 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:48,867 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, openSeqNum=35, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:48,867 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952548867"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952548867"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952548867"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952548867"}]},"ts":"1689952548867"} 2023-07-21 15:15:48,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=67 2023-07-21 15:15:48,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=67, state=SUCCESS; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,41299,1689952542769 in 195 msec 2023-07-21 15:15:48,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE in 615 msec 2023-07-21 15:15:49,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure.ProcedureSyncWait(216): waitFor pid=67 2023-07-21 15:15:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36355,1689952536596] are moved back to default 2023-07-21 15:15:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-21 15:15:49,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:49,264 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36355] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:56146 deadline: 1689952609264, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=41299 startCode=1689952542769. As of locationSeqNum=31. 2023-07-21 15:15:49,372 DEBUG [hconnection-0x46251d71-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:49,382 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55358, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:49,389 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:49,390 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:49,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=appInfo 2023-07-21 15:15:49,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:49,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-21 15:15:49,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=70, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:49,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=70 2023-07-21 15:15:49,411 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:49,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-21 15:15:49,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=70 2023-07-21 15:15:49,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 
'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:49,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:49,519 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:49,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 71 2023-07-21 15:15:49,520 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36355] ipc.CallRunner(144): callId: 185 service: ClientService methodName: ExecService size: 542 connection: 136.243.18.41:56136 deadline: 1689952609520, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=41299 startCode=1689952542769. As of locationSeqNum=31. 2023-07-21 15:15:49,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 15:15:49,524 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:15:49,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 15:15:49,624 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:49,625 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:55362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:49,628 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:49,629 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:49,629 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:49,630 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:49,635 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:49,637 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:49,638 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca empty. 
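The CreateNamespaceProcedure and CreateTableProcedure entries above correspond to creating the Group_foo namespace pinned to the appInfo rsgroup (via the "hbase.rsgroup.name" property shown verbatim in the log) and then a single-family table inside it. The sketch below expresses the same two steps with the standard HBase 2.x Admin API; it is a hedged illustration, not the test's code.

    // Hedged sketch of the namespace/table creation logged above (HBase 2.x Admin API).
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class GroupNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Namespace bound to the "appInfo" rsgroup via its configuration property.
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "appInfo")
              .build());
          // Single-family table inside that namespace; its regions land on appInfo's servers.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_foo:Group_testCreateAndAssign"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
        }
      }
    }
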
2023-07-21 15:15:49,640 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:49,641 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-21 15:15:49,689 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:49,691 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => a0e4aea4d1b53c9f6610796f01f50bca, NAME => 'Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:49,705 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:49,706 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing a0e4aea4d1b53c9f6610796f01f50bca, disabling compactions & flushes 2023-07-21 15:15:49,706 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:49,706 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:49,706 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. after waiting 0 ms 2023-07-21 15:15:49,706 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:49,706 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 
2023-07-21 15:15:49,706 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for a0e4aea4d1b53c9f6610796f01f50bca: 2023-07-21 15:15:49,708 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:49,709 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689952549709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952549709"}]},"ts":"1689952549709"} 2023-07-21 15:15:49,711 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:49,712 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:49,712 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952549712"}]},"ts":"1689952549712"} 2023-07-21 15:15:49,714 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-21 15:15:49,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, ASSIGN}] 2023-07-21 15:15:49,720 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, ASSIGN 2023-07-21 15:15:49,721 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=72, ppid=71, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:49,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 15:15:49,873 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=a0e4aea4d1b53c9f6610796f01f50bca, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:49,873 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689952549873"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952549873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952549873"}]},"ts":"1689952549873"} 2023-07-21 15:15:49,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; 
OpenRegionProcedure a0e4aea4d1b53c9f6610796f01f50bca, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:50,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:50,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a0e4aea4d1b53c9f6610796f01f50bca, NAME => 'Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:50,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:50,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,047 INFO [StoreOpener-a0e4aea4d1b53c9f6610796f01f50bca-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,049 DEBUG [StoreOpener-a0e4aea4d1b53c9f6610796f01f50bca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/f 2023-07-21 15:15:50,049 DEBUG [StoreOpener-a0e4aea4d1b53c9f6610796f01f50bca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/f 2023-07-21 15:15:50,049 INFO [StoreOpener-a0e4aea4d1b53c9f6610796f01f50bca-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a0e4aea4d1b53c9f6610796f01f50bca columnFamilyName f 2023-07-21 15:15:50,050 INFO [StoreOpener-a0e4aea4d1b53c9f6610796f01f50bca-1] regionserver.HStore(310): Store=a0e4aea4d1b53c9f6610796f01f50bca/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-07-21 15:15:50,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:50,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a0e4aea4d1b53c9f6610796f01f50bca; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11490149760, jitterRate=0.07010358572006226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a0e4aea4d1b53c9f6610796f01f50bca: 2023-07-21 15:15:50,059 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca., pid=73, masterSystemTime=1689952550040 2023-07-21 15:15:50,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:50,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 
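The region is opened on jenkins-hbase17.apache.org,36355, the server this test had placed in the 'appInfo' rsgroup, which is the placement testNamespaceCreateAndAssign is exercising. A hedged sketch of checking that placement with the branch-2 RSGroupAdminClient that appears later in this trace; the group, table, and server names are copied from the log, while the helper itself is illustrative:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class VerifyAssignmentSketch {
  static void verify(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // The group that namespace Group_foo is bound to should list both the table
    // and the server hosting its single region.
    RSGroupInfo appInfo = rsGroupAdmin.getRSGroupInfo("appInfo");
    boolean hasServer = appInfo.getServers()
        .contains(Address.fromParts("jenkins-hbase17.apache.org", 36355));
    boolean hasTable = appInfo.getTables()
        .contains(TableName.valueOf("Group_foo:Group_testCreateAndAssign"));
    System.out.println("server in appInfo: " + hasServer + ", table in appInfo: " + hasTable);
  }
}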
2023-07-21 15:15:50,062 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=a0e4aea4d1b53c9f6610796f01f50bca, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:50,062 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689952550062"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952550062"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952550062"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952550062"}]},"ts":"1689952550062"} 2023-07-21 15:15:50,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-21 15:15:50,067 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; OpenRegionProcedure a0e4aea4d1b53c9f6610796f01f50bca, server=jenkins-hbase17.apache.org,36355,1689952536596 in 187 msec 2023-07-21 15:15:50,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=71 2023-07-21 15:15:50,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=71, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, ASSIGN in 350 msec 2023-07-21 15:15:50,072 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:50,072 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952550072"}]},"ts":"1689952550072"} 2023-07-21 15:15:50,074 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-21 15:15:50,084 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=71, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:50,086 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 569 msec 2023-07-21 15:15:50,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 15:15:50,126 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 71 completed 2023-07-21 15:15:50,127 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:50,132 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,133 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,134 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,138 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952550138"}]},"ts":"1689952550138"} 2023-07-21 15:15:50,140 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-21 15:15:50,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 15:15:50,142 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-21 15:15:50,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, UNASSIGN}] 2023-07-21 15:15:50,145 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, UNASSIGN 2023-07-21 15:15:50,148 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a0e4aea4d1b53c9f6610796f01f50bca, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:50,148 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689952550148"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952550148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952550148"}]},"ts":"1689952550148"} 2023-07-21 15:15:50,153 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure a0e4aea4d1b53c9f6610796f01f50bca, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:50,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 15:15:50,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a0e4aea4d1b53c9f6610796f01f50bca, disabling compactions & flushes 2023-07-21 15:15:50,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:50,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 
2023-07-21 15:15:50,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. after waiting 0 ms 2023-07-21 15:15:50,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:50,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:50,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca. 2023-07-21 15:15:50,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a0e4aea4d1b53c9f6610796f01f50bca: 2023-07-21 15:15:50,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,316 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a0e4aea4d1b53c9f6610796f01f50bca, regionState=CLOSED 2023-07-21 15:15:50,316 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689952550316"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952550316"}]},"ts":"1689952550316"} 2023-07-21 15:15:50,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-21 15:15:50,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure a0e4aea4d1b53c9f6610796f01f50bca, server=jenkins-hbase17.apache.org,36355,1689952536596 in 165 msec 2023-07-21 15:15:50,322 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=74 2023-07-21 15:15:50,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=74, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=a0e4aea4d1b53c9f6610796f01f50bca, UNASSIGN in 178 msec 2023-07-21 15:15:50,323 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952550323"}]},"ts":"1689952550323"} 2023-07-21 15:15:50,325 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-21 15:15:50,326 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-21 15:15:50,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 195 msec 2023-07-21 15:15:50,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 15:15:50,444 INFO [Listener 
at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 74 completed 2023-07-21 15:15:50,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,448 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-21 15:15:50,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:50,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:50,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:50,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 15:15:50,457 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,463 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,469 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/recovered.edits] 2023-07-21 15:15:50,478 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca/recovered.edits/4.seqid 2023-07-21 15:15:50,479 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_foo/Group_testCreateAndAssign/a0e4aea4d1b53c9f6610796f01f50bca 2023-07-21 15:15:50,479 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-21 15:15:50,483 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,486 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-21 15:15:50,488 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-21 15:15:50,490 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,490 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-21 15:15:50,490 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952550490"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:50,492 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:50,492 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a0e4aea4d1b53c9f6610796f01f50bca, NAME => 'Group_foo:Group_testCreateAndAssign,,1689952549513.a0e4aea4d1b53c9f6610796f01f50bca.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:50,492 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 
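DisableTableProcedure pid=74 and DeleteTableProcedure pid=77 above unassign the region, archive its files under archive/data/Group_foo/..., and remove both the region row and the table-state row from hbase:meta. On the client side this is just two Admin calls; a minimal sketch, assuming an Admin handle like the one above:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableSketch {
  static void drop(Admin admin) throws IOException {
    TableName table = TableName.valueOf("Group_foo:Group_testCreateAndAssign");
    // A table must be disabled before it can be deleted; each call blocks until the
    // corresponding master procedure (DISABLE / DELETE in the log above) completes.
    if (admin.tableExists(table)) {
      admin.disableTable(table);
      admin.deleteTable(table);
    }
  }
}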
2023-07-21 15:15:50,492 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952550492"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:50,494 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-21 15:15:50,496 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:15:50,497 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 51 msec 2023-07-21 15:15:50,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 15:15:50,558 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 77 completed 2023-07-21 15:15:50,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_foo 2023-07-21 15:15:50,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,582 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=78, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,782 INFO [AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData-prefix:jenkins-hbase17.apache.org,43019,1689952533620] wal.AbstractFSWAL(1141): Slow sync cost: 195 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:15:50,782 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=78, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 15:15:50,788 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=78, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,790 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 15:15:50,790 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:50,791 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=78, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; 
DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,800 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=78, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:15:50,802 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 228 msec 2023-07-21 15:15:50,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-21 15:15:50,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:50,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:50,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:50,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:50,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:50,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:50,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:50,895 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:50,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:50,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:15:50,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:50,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:50,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
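DeleteNamespaceProcedure pid=78 then removes the Group_foo namespace itself, including its znode under /hbase/namespace, after which the TestRSGroupsBase teardown starts restoring the default rsgroup layout with the empty moveTables/moveServers calls and the removal of the 'master' group seen above. The namespace removal is a single Admin call; a minimal sketch:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Admin;

final class DropNamespaceSketch {
  static void drop(Admin admin) throws IOException {
    // The delete is rejected if the namespace still contains tables,
    // which is why the table above is dropped first.
    admin.deleteNamespace("Group_foo");
  }
}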
2023-07-21 15:15:50,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:50,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36355] to rsgroup default 2023-07-21 15:15:50,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:50,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-21 15:15:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36355,1689952536596] are moved back to appInfo 2023-07-21 15:15:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-21 15:15:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:50,916 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup appInfo 2023-07-21 15:15:50,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:50,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:50,925 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:50,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:50,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:50,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:50,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 
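The teardown then moves the remaining 'appInfo' member, jenkins-hbase17.apache.org:36355, back to 'default', drops the now-empty 'appInfo' group, and re-adds the helper 'master' group that TestRSGroupsBase keeps between methods. A sketch of that cleanup, assuming the same RSGroupAdminClient API; host and group names are copied from the log:

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RsGroupTeardownSketch {
  static void cleanup(RSGroupAdminClient rsGroupAdmin) throws IOException {
    // Return the lone appInfo member to the default group, then drop the empty group.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 36355)),
        "default");
    rsGroupAdmin.removeRSGroup("appInfo");
    // The test suite keeps a 'master' group around between methods; re-create it.
    rsGroupAdmin.addRSGroup("master");
  }
}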
2023-07-21 15:15:50,936 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:50,936 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:50,938 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:50,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:50,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 367 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953750938, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:50,939 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:15:50,940 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:50,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:50,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:50,941 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:50,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:50,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:50,961 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=509 (was 501) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1332363002_17 at /127.0.0.1:45296 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:43498 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:47922 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1332363002_17 at /127.0.0.1:45300 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 794) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=848 (was 843) - SystemLoadAverage LEAK? -, ProcessCount=186 (was 186), AvailableMemoryMB=1863 (was 2011) 2023-07-21 15:15:50,961 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-21 15:15:50,980 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=509, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=848, ProcessCount=186, AvailableMemoryMB=1862 2023-07-21 15:15:50,981 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-21 15:15:50,981 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-21 15:15:50,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:50,989 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:50,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:50,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
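The moveServers call for jenkins-hbase17.apache.org:43019, attempted in the teardown above and again just below for testCreateAndDrop, targets the active master's RPC endpoint rather than a region server, so the master rejects it with the ConstraintException shown in the traces; TestRSGroupsBase deliberately tolerates this ("Got this on setup, FYI"). A hedged sketch of that tolerant call pattern:

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveMasterSketch {
  static void tryMoveMaster(RSGroupAdminClient rsGroupAdmin) throws IOException {
    try {
      // 43019 is the master port in this run; only known region servers can be moved,
      // so this call is expected to fail.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 43019)),
          "master");
    } catch (ConstraintException e) {
      // Expected: "Server ... is either offline or it does not exist."
      System.out.println("Got this on setup, FYI: " + e.getMessage());
    }
  }
}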
2023-07-21 15:15:50,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:50,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:50,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:50,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:50,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:50,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:50,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:51,003 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:51,005 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:51,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:51,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:51,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:51,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:51,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:51,017 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:51,019 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:51,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:51,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 395 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953751019, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:51,020 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:51,022 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:51,023 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:51,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:51,024 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:51,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:51,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:51,027 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:51,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:51,030 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:51,031 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 79 2023-07-21 15:15:51,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-21 15:15:51,033 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:51,033 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:51,034 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:51,035 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:51,037 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:51,038 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 empty. 2023-07-21 15:15:51,039 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:51,039 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-21 15:15:51,054 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:51,055 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 879707a1189393762563699073831179, NAME => 'Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:51,067 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:51,067 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing 879707a1189393762563699073831179, disabling compactions & flushes 2023-07-21 15:15:51,067 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 
2023-07-21 15:15:51,067 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,067 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. after waiting 0 ms 2023-07-21 15:15:51,067 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,067 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,068 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 879707a1189393762563699073831179: 2023-07-21 15:15:51,070 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:51,071 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952551071"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952551071"}]},"ts":"1689952551071"} 2023-07-21 15:15:51,072 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:51,073 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:51,073 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952551073"}]},"ts":"1689952551073"} 2023-07-21 15:15:51,074 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-21 15:15:51,076 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:51,077 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:51,077 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:51,077 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:51,077 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:15:51,077 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:51,077 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, ASSIGN}] 2023-07-21 15:15:51,079 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, ASSIGN 2023-07-21 15:15:51,080 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=80, ppid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:51,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-21 15:15:51,230 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:15:51,231 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=879707a1189393762563699073831179, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:51,231 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952551231"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952551231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952551231"}]},"ts":"1689952551231"} 2023-07-21 15:15:51,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=80, state=RUNNABLE; OpenRegionProcedure 879707a1189393762563699073831179, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:51,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-21 15:15:51,390 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 
2023-07-21 15:15:51,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 879707a1189393762563699073831179, NAME => 'Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:51,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop 879707a1189393762563699073831179 2023-07-21 15:15:51,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:51,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 879707a1189393762563699073831179 2023-07-21 15:15:51,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 879707a1189393762563699073831179 2023-07-21 15:15:51,393 INFO [StoreOpener-879707a1189393762563699073831179-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 879707a1189393762563699073831179 2023-07-21 15:15:51,394 DEBUG [StoreOpener-879707a1189393762563699073831179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/cf 2023-07-21 15:15:51,395 DEBUG [StoreOpener-879707a1189393762563699073831179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/cf 2023-07-21 15:15:51,395 INFO [StoreOpener-879707a1189393762563699073831179-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 879707a1189393762563699073831179 columnFamilyName cf 2023-07-21 15:15:51,395 INFO [StoreOpener-879707a1189393762563699073831179-1] regionserver.HStore(310): Store=879707a1189393762563699073831179/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:51,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:51,397 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:51,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 879707a1189393762563699073831179 2023-07-21 15:15:51,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:51,403 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 879707a1189393762563699073831179; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11725035520, jitterRate=0.0919790267944336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:51,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 879707a1189393762563699073831179: 2023-07-21 15:15:51,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179., pid=81, masterSystemTime=1689952551385 2023-07-21 15:15:51,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 
2023-07-21 15:15:51,406 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=80 updating hbase:meta row=879707a1189393762563699073831179, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:51,406 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952551406"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952551406"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952551406"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952551406"}]},"ts":"1689952551406"} 2023-07-21 15:15:51,410 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=80 2023-07-21 15:15:51,410 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=80, state=SUCCESS; OpenRegionProcedure 879707a1189393762563699073831179, server=jenkins-hbase17.apache.org,36355,1689952536596 in 175 msec 2023-07-21 15:15:51,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-21 15:15:51,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, ASSIGN in 333 msec 2023-07-21 15:15:51,414 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:51,415 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952551414"}]},"ts":"1689952551414"} 2023-07-21 15:15:51,416 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-21 15:15:51,418 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=79, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:51,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 390 msec 2023-07-21 15:15:51,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=79 2023-07-21 15:15:51,635 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 79 completed 2023-07-21 15:15:51,636 DEBUG [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. Timeout = 60000ms 2023-07-21 15:15:51,636 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:51,642 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 
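Annotation (not part of the original log): the CREATE operation that just completed (procId 79) prints the full descriptor for Group_testCreateAndDrop, a single column family 'cf' with one version and REGION_REPLICATION => '1'. A minimal client-side sketch of an equivalent createTable call with the HBase 2.x Admin API; the table and family names mirror the log, while the class name CreateGroupTableSketch and the connection setup are illustrative assumptions.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCreateAndDrop"))
          .setRegionReplication(1)                                  // TABLE_ATTRIBUTES {REGION_REPLICATION => '1'}
          .setColumnFamily(ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("cf"))
              .setMaxVersions(1)                                    // VERSIONS => '1'
              .build())
          .build();
      admin.createTable(desc);  // drives a CreateTableProcedure on the master, as with pid=79 above
    }
  }
}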
2023-07-21 15:15:51,642 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:51,642 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-21 15:15:51,643 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:51,652 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-21 15:15:51,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCreateAndDrop 2023-07-21 15:15:51,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:51,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-21 15:15:51,670 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952551670"}]},"ts":"1689952551670"} 2023-07-21 15:15:51,675 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-21 15:15:51,678 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-21 15:15:51,680 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, UNASSIGN}] 2023-07-21 15:15:51,682 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, UNASSIGN 2023-07-21 15:15:51,689 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=879707a1189393762563699073831179, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:51,689 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952551689"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952551689"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952551689"}]},"ts":"1689952551689"} 2023-07-21 15:15:51,693 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE; CloseRegionProcedure 879707a1189393762563699073831179, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:51,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-21 15:15:51,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 879707a1189393762563699073831179 2023-07-21 15:15:51,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1604): Closing 879707a1189393762563699073831179, disabling compactions & flushes 2023-07-21 15:15:51,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. after waiting 0 ms 2023-07-21 15:15:51,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:51,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179. 2023-07-21 15:15:51,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 879707a1189393762563699073831179: 2023-07-21 15:15:51,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 879707a1189393762563699073831179 2023-07-21 15:15:51,873 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=879707a1189393762563699073831179, regionState=CLOSED 2023-07-21 15:15:51,873 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952551872"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952551872"}]},"ts":"1689952551872"} 2023-07-21 15:15:51,882 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-21 15:15:51,882 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; CloseRegionProcedure 879707a1189393762563699073831179, server=jenkins-hbase17.apache.org,36355,1689952536596 in 184 msec 2023-07-21 15:15:51,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-21 15:15:51,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=879707a1189393762563699073831179, UNASSIGN in 202 msec 2023-07-21 15:15:51,886 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952551886"}]},"ts":"1689952551886"} 2023-07-21 15:15:51,888 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-21 15:15:51,893 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set 
Group_testCreateAndDrop to state=DISABLED 2023-07-21 15:15:51,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 245 msec 2023-07-21 15:15:51,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-21 15:15:51,981 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 82 completed 2023-07-21 15:15:51,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCreateAndDrop 2023-07-21 15:15:51,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:51,991 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=85, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:51,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-21 15:15:51,993 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=85, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:51,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:51,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:51,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:51,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 15:15:52,001 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:52,003 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/cf, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/recovered.edits] 2023-07-21 15:15:52,009 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179/recovered.edits/4.seqid 2023-07-21 15:15:52,010 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCreateAndDrop/879707a1189393762563699073831179 2023-07-21 15:15:52,010 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-21 15:15:52,013 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=85, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:52,015 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-21 15:15:52,021 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-21 15:15:52,022 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=85, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:52,023 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 2023-07-21 15:15:52,023 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952552023"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:52,025 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:52,025 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 879707a1189393762563699073831179, NAME => 'Group_testCreateAndDrop,,1689952551027.879707a1189393762563699073831179.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:52,025 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 
2023-07-21 15:15:52,025 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952552025"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:52,027 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-21 15:15:52,028 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=85, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:15:52,033 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 46 msec 2023-07-21 15:15:52,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 15:15:52,101 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 85 completed 2023-07-21 15:15:52,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:52,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:52,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:52,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
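Annotation (not part of the original log): the DISABLE (procId 82) and DELETE (procId 85) operations recorded above map to the standard Admin calls for dropping a table; the log shows disableTable closing the region and flipping the table state, and deleteTable archiving the region directory and removing the rows from hbase:meta. A minimal sketch follows, again with an illustrative class name and connection setup; only the table name is taken from the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testCreateAndDrop");
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);  // DisableTableProcedure (cf. pid=82): regions closed, state -> DISABLED
      }
      admin.deleteTable(tn);     // DeleteTableProcedure (cf. pid=85): region dir archived, hbase:meta rows removed
    }
  }
}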
2023-07-21 15:15:52,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:52,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:52,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:52,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:52,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:52,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:52,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:52,122 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:52,124 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:52,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:52,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:52,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:52,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:52,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:52,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:52,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:52,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 15:15:52,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 454 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953752154, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
2023-07-21 15:15:52,156 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-21 15:15:52,161 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 15:15:52,165 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:15:52,165 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:15:52,166 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 15:15:52,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default
2023-07-21 15:15:52,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 15:15:52,205 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=509 (was 509), OpenFileDescriptor=801 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=848 (was 848), ProcessCount=186 (was 186), AvailableMemoryMB=1819 (was 1862)
2023-07-21 15:15:52,205 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=509 is superior to 500
2023-07-21 15:15:52,235 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=509, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=848, ProcessCount=186, AvailableMemoryMB=1818
2023-07-21 15:15:52,235 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=509 is superior to 500
2023-07-21 15:15:52,236 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testCloneSnapshot
2023-07-21 15:15:52,240 INFO
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:52,240 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:52,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:52,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:52,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:52,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:52,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:52,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:52,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:52,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:52,261 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:52,269 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:52,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:52,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:52,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:52,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:52,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:52,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:52,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:15:52,310 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master
2023-07-21 15:15:52,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 15:15:52,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 482 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953752310, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
2023-07-21 15:15:52,311 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-21 15:15:52,313 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 15:15:52,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:15:52,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:15:52,316 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 15:15:52,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default
2023-07-21 15:15:52,319 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 15:15:52,323 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-21 15:15:52,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=86, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot
2023-07-21 15:15:52,331 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION
2023-07-21 15:15:52,332 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 86
2023-07-21 15:15:52,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86
2023-07-21 15:15:52,340 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:15:52,343 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 15:15:52,343 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 15:15:52,346 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-21 15:15:52,348 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,348 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d empty. 2023-07-21 15:15:52,349 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,349 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-21 15:15:52,426 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:52,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86 2023-07-21 15:15:52,447 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => c1a43e341958905d8067e3dc9ffd8c7d, NAME => 'Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing c1a43e341958905d8067e3dc9ffd8c7d, disabling compactions & flushes 2023-07-21 15:15:52,493 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. after waiting 0 ms 2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:52,493 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 
2023-07-21 15:15:52,493 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for c1a43e341958905d8067e3dc9ffd8c7d: 2023-07-21 15:15:52,500 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:52,502 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952552501"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952552501"}]},"ts":"1689952552501"} 2023-07-21 15:15:52,505 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:52,507 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:52,507 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952552507"}]},"ts":"1689952552507"} 2023-07-21 15:15:52,509 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-21 15:15:52,513 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:52,513 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:52,513 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:52,513 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:52,514 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:15:52,514 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:52,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, ASSIGN}] 2023-07-21 15:15:52,522 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, ppid=86, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, ASSIGN 2023-07-21 15:15:52,524 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, ppid=86, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:52,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86 2023-07-21 15:15:52,674 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
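For reference, the CreateTableProcedure logged above (pid=86) is driven by an ordinary client-side create-table request. A minimal sketch of the equivalent HBase 2.x client call, assuming a Connection to this mini cluster is already open; only the table name 'Group_testCloneSnapshot' and the single 'test' family are taken from the log, the surrounding class and method names are illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestCloneSnapshotTable {
  // Sketch only: 'connection' is assumed to already point at the test's mini cluster.
  static void createTable(Connection connection) throws IOException {
    // Table and column family names as shown in the HMaster$4(2112) create entry above;
    // the 'test' family keeps the defaults (VERSIONS => '1', BLOCKSIZE => '65536', no compression).
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testCloneSnapshot"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
        .build();
    try (Admin admin = connection.getAdmin()) {
      // Blocks until the master's CreateTableProcedure (pid=86 in this log) completes.
      admin.createTable(desc);
    }
  }
}
```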
2023-07-21 15:15:52,675 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=c1a43e341958905d8067e3dc9ffd8c7d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:52,676 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952552675"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952552675"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952552675"}]},"ts":"1689952552675"} 2023-07-21 15:15:52,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; OpenRegionProcedure c1a43e341958905d8067e3dc9ffd8c7d, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:52,833 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:52,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c1a43e341958905d8067e3dc9ffd8c7d, NAME => 'Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:52,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:52,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,835 INFO [StoreOpener-c1a43e341958905d8067e3dc9ffd8c7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,837 DEBUG [StoreOpener-c1a43e341958905d8067e3dc9ffd8c7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/test 2023-07-21 15:15:52,837 DEBUG [StoreOpener-c1a43e341958905d8067e3dc9ffd8c7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/test 2023-07-21 15:15:52,838 INFO [StoreOpener-c1a43e341958905d8067e3dc9ffd8c7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c1a43e341958905d8067e3dc9ffd8c7d columnFamilyName test 2023-07-21 15:15:52,838 INFO [StoreOpener-c1a43e341958905d8067e3dc9ffd8c7d-1] regionserver.HStore(310): Store=c1a43e341958905d8067e3dc9ffd8c7d/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:52,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:52,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:52,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened c1a43e341958905d8067e3dc9ffd8c7d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11239302720, jitterRate=0.046741634607315063}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:52,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for c1a43e341958905d8067e3dc9ffd8c7d: 2023-07-21 15:15:52,848 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d., pid=88, masterSystemTime=1689952552829 2023-07-21 15:15:52,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:52,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 
2023-07-21 15:15:52,850 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=c1a43e341958905d8067e3dc9ffd8c7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:52,850 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952552850"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952552850"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952552850"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952552850"}]},"ts":"1689952552850"} 2023-07-21 15:15:52,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 15:15:52,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; OpenRegionProcedure c1a43e341958905d8067e3dc9ffd8c7d, server=jenkins-hbase17.apache.org,36355,1689952536596 in 175 msec 2023-07-21 15:15:52,855 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-21 15:15:52,855 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, ASSIGN in 340 msec 2023-07-21 15:15:52,856 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:52,856 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952552856"}]},"ts":"1689952552856"} 2023-07-21 15:15:52,857 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-21 15:15:52,859 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=86, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:52,863 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 536 msec 2023-07-21 15:15:52,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=86 2023-07-21 15:15:52,945 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 86 completed 2023-07-21 15:15:52,945 DEBUG [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-21 15:15:52,945 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:52,955 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
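The hbase.HBaseTestingUtility(3430)/(3484) entries just above ("Waiting until all regions ... get assigned", "All regions ... assigned to meta") come from the test utility's assignment wait. A sketch of that call, assuming the suite's shared HBaseTestingUtility instance (named TEST_UTIL here purely for illustration):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForGroupTableAssignment {
  // Sketch only: TEST_UTIL is assumed to be the utility that started this mini cluster.
  static void waitForAssignment(HBaseTestingUtility TEST_UTIL) throws IOException {
    // Blocks until every region of the table is assigned (default 60s timeout),
    // matching the "Timeout = 60000ms" wait logged above.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testCloneSnapshot"));
  }
}
```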
2023-07-21 15:15:52,955 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:52,955 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-21 15:15:52,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1583): Client=jenkins//136.243.18.41 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 15:15:52,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1689952552964 (current time:1689952552964). 2023-07-21 15:15:52,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-21 15:15:52,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] zookeeper.ReadOnlyZKClient(139): Connect 0x37e71225 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:15:52,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45a750ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:15:52,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:15:52,976 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37204, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:15:52,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37e71225 to 127.0.0.1:62052 2023-07-21 15:15:52,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:52,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
2023-07-21 15:15:52,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 15:15:52,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=89, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 15:15:52,995 DEBUG [PEWorker-4] locking.LockProcedure(309): LOCKED pid=89, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 15:15:52,996 INFO [PEWorker-4] procedure2.TimeoutExecutorThread(81): ADDED pid=89, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1689953152996 2023-07-21 15:15:52,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-21 15:15:52,996 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-21 15:15:52,997 DEBUG [PEWorker-5] locking.LockProcedure(242): UNLOCKED pid=89, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 15:15:52,998 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 15:15:52,999 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 8 msec 2023-07-21 15:15:52,999 DEBUG [PEWorker-5] locking.LockProcedure(309): LOCKED pid=90, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 15:15:53,000 INFO [PEWorker-5] procedure2.TimeoutExecutorThread(81): ADDED pid=90, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1689953153000 2023-07-21 15:15:53,002 DEBUG [Listener at localhost.localdomain/38883] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-21 15:15:53,002 DEBUG [Listener at localhost.localdomain/38883] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
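The snapshot request ({ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }) and the "Waiting a max of 300000 ms for snapshot ... (#1) Sleeping: 100ms" polling above correspond to a synchronous Admin snapshot call. A minimal client-side sketch, again assuming an open Connection; the clone-target table name is hypothetical since the clone step has not appeared in the log at this point:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class SnapshotAndCloneSketch {
  static void snapshotAndClone(Connection connection) throws Exception {
    try (Admin admin = connection.getAdmin()) {
      // Flush snapshot of the enabled table; Admin polls the master until the
      // snapshot completes, which is the 100 ms sleep loop logged above.
      admin.snapshot("Group_testCloneSnapshot_snap",
          TableName.valueOf("Group_testCloneSnapshot"));
      // Clone the finished snapshot into a new table (target name is illustrative).
      admin.cloneSnapshot("Group_testCloneSnapshot_snap",
          TableName.valueOf("Group_testCloneSnapshot_clone"));
    }
  }
}
```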
2023-07-21 15:15:53,021 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-21 15:15:53,023 INFO [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-21 15:15:53,023 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 15:15:53,024 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-21 15:15:53,024 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-21 15:15:53,025 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,025 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,026 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,026 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,026 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,026 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,026 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,026 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,027 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,027 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,027 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,027 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,027 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,028 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 15:15:53,032 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,032 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 15:15:53,032 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,033 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,034 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 15:15:53,034 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 15:15:53,034 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-21 15:15:53,034 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 15:15:53,033 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 15:15:53,035 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-21 15:15:53,035 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 15:15:53,035 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,38527,1689952536414' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 15:15:53,035 DEBUG [member: 
'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-21 15:15:53,037 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 15:15:53,037 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 15:15:53,036 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,41299,1689952542769' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 15:15:53,037 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,39253,1689952540479' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 15:15:53,037 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-21 15:15:53,038 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-21 15:15:53,038 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-21 15:15:53,038 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,36355,1689952536596' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-21 15:15:53,038 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,041 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,041 DEBUG 
[member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,041 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,041 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 15:15:53,042 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 15:15:53,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,042 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 
'reached' or 'abort' from coordinator 2023-07-21 15:15:53,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 15:15:53,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 15:15:53,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,044 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,044 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,044 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 15:15:53,045 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,36355,1689952536596' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-21 15:15:53,045 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-21 15:15:53,045 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@541e5ee1[Count = 0] remaining members to acquire global barrier 2023-07-21 15:15:53,045 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,045 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 
15:15:53,045 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-21 15:15:53,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-21 15:15:53,046 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,046 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,39253,1689952540479' in zk 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,38527,1689952536414' in zk 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 15:15:53,046 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,41299,1689952542769' in zk 2023-07-21 15:15:53,047 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-21 15:15:53,047 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-21 15:15:53,047 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:53,047 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 15:15:53,047 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 15:15:53,047 DEBUG [member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 15:15:53,048 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 15:15:53,048 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 15:15:53,048 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 15:15:53,048 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-21 15:15:53,048 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. started... 2023-07-21 15:15:53,049 DEBUG [member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 15:15:53,048 DEBUG [member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-21 15:15:53,050 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for c1a43e341958905d8067e3dc9ffd8c7d: 2023-07-21 15:15:53,051 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-21 15:15:53,055 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-21 15:15:53,059 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-21 15:15:53,072 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. completed. 2023-07-21 15:15:53,072 DEBUG [rs(jenkins-hbase17.apache.org,36355,1689952536596)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:53,073 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-21 15:15:53,073 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 
2023-07-21 15:15:53,073 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,073 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-21 15:15:53,073 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase17.apache.org,36355,1689952536596' in zk 2023-07-21 15:15:53,074 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-21 15:15:53,074 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,074 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-21 15:15:53,074 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 15:15:53,075 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 15:15:53,074 DEBUG [member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 
2023-07-21 15:15:53,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 15:15:53,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 15:15:53,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,078 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 15:15:53,078 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,078 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,079 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase17.apache.org,36355,1689952536596': 2023-07-21 15:15:53,081 INFO [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-21 15:15:53,081 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-07-21 15:15:53,081 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,36355,1689952536596' released barrier for procedure 'Group_testCloneSnapshot_snap', counting down latch. 
Waiting for 0 more 2023-07-21 15:15:53,081 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-21 15:15:53,081 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-21 15:15:53,081 INFO [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snap including nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,082 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,082 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,082 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-21 15:15:53,083 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,083 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,083 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,083 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,084 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-21 15:15:53,084 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing 
znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,085 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,085 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,085 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,085 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,086 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,086 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,086 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,086 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-21 15:15:53,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-21 15:15:53,088 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,088 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,089 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,089 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,089 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,089 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,089 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,092 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,092 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,092 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,092 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,092 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,093 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:53,093 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [(jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-21 15:15:53,093 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-21 15:15:53,093 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,093 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,093 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,093 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,093 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,095 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 
2023-07-21 15:15:53,095 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-21 15:15:53,095 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,095 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-21 15:15:53,095 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,097 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,097 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-21 15:15:53,097 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:15:53,103 DEBUG [Listener at localhost.localdomain/38883] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-21 15:15:53,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 15:15:53,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-21 15:15:53,108 DEBUG [Listener at localhost.localdomain/38883] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 
2023-07-21 15:15:53,157 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,192 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-21 15:15:53,193 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,193 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-21 15:15:53,193 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1689952552996Consolidate snapshot: Group_testCloneSnapshot_snap at 1689952553093 (+97 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1689952553093Writing data manifest for Group_testCloneSnapshot_snap at 1689952553107 (+14 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1689952553140 (+33 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1689952553192 (+52 ms) 2023-07-21 15:15:53,195 DEBUG [PEWorker-1] locking.LockProcedure(242): UNLOCKED pid=90, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 15:15:53,196 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 197 msec 2023-07-21 15:15:53,308 DEBUG [Listener at localhost.localdomain/38883] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-21 15:15:53,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-21 15:15:53,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 
2023-07-21 15:15:53,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-21 15:15:53,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:53,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:53,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:53,330 ERROR [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 2023-07-21 15:15:53,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 15:15:53,343 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-21 15:15:53,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 15:15:53,364 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:53,370 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 2023-07-21 15:15:53,371 DEBUG [PEWorker-3] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-21 15:15:53,372 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(239): region to add: c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:53,372 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(585): clone region=c1a43e341958905d8067e3dc9ffd8c7d as 05457a46c846553f9d7d5d4757f732bf in snapshot Group_testCloneSnapshot_snap 2023-07-21 15:15:53,373 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 05457a46c846553f9d7d5d4757f732bf, NAME => 'Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated 
Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing 05457a46c846553f9d7d5d4757f732bf, disabling compactions & flushes 2023-07-21 15:15:53,389 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. after waiting 0 ms 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,389 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,389 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 05457a46c846553f9d7d5d4757f732bf: 2023-07-21 15:15:53,389 INFO [PEWorker-3] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 2023-07-21 15:15:53,390 INFO [PEWorker-3] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-21 15:15:53,393 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689952553393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952553393"}]},"ts":"1689952553393"} 2023-07-21 15:15:53,395 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 15:15:53,396 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952553396"}]},"ts":"1689952553396"} 2023-07-21 15:15:53,397 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-21 15:15:53,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:15:53,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:15:53,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:15:53,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:15:53,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 15:15:53,401 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:15:53,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, ASSIGN}] 2023-07-21 15:15:53,404 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, ASSIGN 2023-07-21 15:15:53,406 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:53,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 15:15:53,556 INFO [jenkins-hbase17:43019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 15:15:53,557 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=05457a46c846553f9d7d5d4757f732bf, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,558 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689952553557"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952553557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952553557"}]},"ts":"1689952553557"} 2023-07-21 15:15:53,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 05457a46c846553f9d7d5d4757f732bf, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:53,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 15:15:53,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05457a46c846553f9d7d5d4757f732bf, NAME => 'Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:53,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:53,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,720 INFO [StoreOpener-05457a46c846553f9d7d5d4757f732bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,721 DEBUG [StoreOpener-05457a46c846553f9d7d5d4757f732bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/test 2023-07-21 15:15:53,721 DEBUG [StoreOpener-05457a46c846553f9d7d5d4757f732bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/test 2023-07-21 15:15:53,722 INFO [StoreOpener-05457a46c846553f9d7d5d4757f732bf-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05457a46c846553f9d7d5d4757f732bf columnFamilyName test 2023-07-21 15:15:53,722 INFO [StoreOpener-05457a46c846553f9d7d5d4757f732bf-1] regionserver.HStore(310): Store=05457a46c846553f9d7d5d4757f732bf/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:53,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:53,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:53,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 05457a46c846553f9d7d5d4757f732bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11616608640, jitterRate=0.08188098669052124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:53,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 05457a46c846553f9d7d5d4757f732bf: 2023-07-21 15:15:53,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf., pid=93, masterSystemTime=1689952553714 2023-07-21 15:15:53,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:53,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 
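At this point the clone's single region has been opened on jenkins-hbase17.apache.org,36355 and the OpenRegionProcedure is about to record regionState=OPEN in hbase:meta. A short sketch of reading that assignment back from the client side follows; it is an illustration only, assuming the standard RegionLocator API and the table name from the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName clone = TableName.valueOf("Group_testCloneSnapshot_clone");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(clone)) {
          // Each HRegionLocation pairs a RegionInfo with the server currently hosting it,
          // i.e. the same server/state information the procedure writes into hbase:meta.
          for (HRegionLocation location : locator.getAllRegionLocations()) {
            System.out.println(location.getRegion().getEncodedName()
                + " -> " + location.getServerName());
          }
        }
      }
    }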
2023-07-21 15:15:53,735 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=05457a46c846553f9d7d5d4757f732bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,735 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689952553734"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952553734"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952553734"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952553734"}]},"ts":"1689952553734"} 2023-07-21 15:15:53,738 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-21 15:15:53,738 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 05457a46c846553f9d7d5d4757f732bf, server=jenkins-hbase17.apache.org,36355,1689952536596 in 177 msec 2023-07-21 15:15:53,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-21 15:15:53,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, ASSIGN in 337 msec 2023-07-21 15:15:53,740 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952553740"}]},"ts":"1689952553740"} 2023-07-21 15:15:53,741 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-21 15:15:53,745 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 ) in 408 msec 2023-07-21 15:15:53,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-21 15:15:53,954 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 91 completed 2023-07-21 15:15:53,955 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-21 15:15:53,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCloneSnapshot 2023-07-21 15:15:53,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:53,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 15:15:53,959 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952553959"}]},"ts":"1689952553959"} 
2023-07-21 15:15:53,960 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-21 15:15:53,961 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-21 15:15:53,962 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, UNASSIGN}] 2023-07-21 15:15:53,963 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, UNASSIGN 2023-07-21 15:15:53,964 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=c1a43e341958905d8067e3dc9ffd8c7d, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:53,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952553964"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952553964"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952553964"}]},"ts":"1689952553964"} 2023-07-21 15:15:53,966 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; CloseRegionProcedure c1a43e341958905d8067e3dc9ffd8c7d, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:54,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 15:15:54,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:54,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing c1a43e341958905d8067e3dc9ffd8c7d, disabling compactions & flushes 2023-07-21 15:15:54,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:54,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:54,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. after waiting 0 ms 2023-07-21 15:15:54,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 
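The DisableTableProcedure above (pid=94) and the DeleteTableProcedure that follows (pid=97) are the server-side halves of the usual disable-then-delete sequence used to drop the source table during test cleanup. A hedged client-side sketch of that sequence is shown below; the class name DropTableSketch is invented, and the connection setup is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("Group_testCloneSnapshot");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // A table must be disabled before it can be deleted; each call blocks until the
          // corresponding master procedure (DisableTableProcedure / DeleteTableProcedure) finishes.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          admin.deleteTable(table);
        }
      }
    }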
2023-07-21 15:15:54,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-21 15:15:54,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d. 2023-07-21 15:15:54,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for c1a43e341958905d8067e3dc9ffd8c7d: 2023-07-21 15:15:54,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:54,129 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=c1a43e341958905d8067e3dc9ffd8c7d, regionState=CLOSED 2023-07-21 15:15:54,129 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689952554129"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952554129"}]},"ts":"1689952554129"} 2023-07-21 15:15:54,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-21 15:15:54,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; CloseRegionProcedure c1a43e341958905d8067e3dc9ffd8c7d, server=jenkins-hbase17.apache.org,36355,1689952536596 in 164 msec 2023-07-21 15:15:54,134 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 15:15:54,134 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=c1a43e341958905d8067e3dc9ffd8c7d, UNASSIGN in 170 msec 2023-07-21 15:15:54,136 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952554136"}]},"ts":"1689952554136"} 2023-07-21 15:15:54,137 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-21 15:15:54,139 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-21 15:15:54,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 184 msec 2023-07-21 15:15:54,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 15:15:54,262 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 94 completed 2023-07-21 15:15:54,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCloneSnapshot 2023-07-21 15:15:54,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=Group_testCloneSnapshot 2023-07-21 15:15:54,269 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=97, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:54,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-21 15:15:54,270 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=97, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:54,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:54,276 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:54,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:15:54,278 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/recovered.edits, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/test] 2023-07-21 15:15:54,282 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/recovered.edits/5.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d/recovered.edits/5.seqid 2023-07-21 15:15:54,284 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot/c1a43e341958905d8067e3dc9ffd8c7d 2023-07-21 15:15:54,284 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-21 15:15:54,286 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=97, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:54,288 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-21 15:15:54,290 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
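Note that DeleteTableProcedure does not remove region data outright: the HFileArchiver entries above show the region's recovered.edits (and any store files) being moved from the table's .tmp layout into the cluster's archive directory. A small sketch of listing what the archiver left behind follows, using the plain Hadoop FileSystem API; the HDFS URI and archive path are copied from the log and would differ in any other deployment.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveListingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Archive location copied from the log; in general it is <hbase.rootdir>/archive/data/<ns>/<table>.
        Path archivedTable = new Path(
            "hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3"
            + "/archive/data/default/Group_testCloneSnapshot");
        FileSystem fs = archivedTable.getFileSystem(conf);
        // Print the archived region directories and their immediate contents
        // (e.g. the recovered.edits directory seen in the log).
        for (FileStatus region : fs.listStatus(archivedTable)) {
          System.out.println(region.getPath());
          for (FileStatus child : fs.listStatus(region.getPath())) {
            System.out.println("  " + child.getPath());
          }
        }
      }
    }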
2023-07-21 15:15:54,291 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=97, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:54,291 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-21 15:15:54,291 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952554291"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:54,292 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:54,292 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c1a43e341958905d8067e3dc9ffd8c7d, NAME => 'Group_testCloneSnapshot,,1689952552322.c1a43e341958905d8067e3dc9ffd8c7d.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:54,292 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-21 15:15:54,293 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952554292"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:54,294 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-21 15:15:54,295 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=97, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:15:54,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 31 msec 2023-07-21 15:15:54,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 15:15:54,380 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 97 completed 2023-07-21 15:15:54,381 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-21 15:15:54,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable Group_testCloneSnapshot_clone 2023-07-21 15:15:54,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-21 15:15:54,386 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952554386"}]},"ts":"1689952554386"} 2023-07-21 15:15:54,388 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-21 15:15:54,390 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-21 15:15:54,391 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, UNASSIGN}] 2023-07-21 15:15:54,392 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=99, ppid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, UNASSIGN 2023-07-21 15:15:54,393 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=99 updating hbase:meta row=05457a46c846553f9d7d5d4757f732bf, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:54,393 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689952554393"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952554393"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952554393"}]},"ts":"1689952554393"} 2023-07-21 15:15:54,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=99, state=RUNNABLE; CloseRegionProcedure 05457a46c846553f9d7d5d4757f732bf, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:54,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-21 15:15:54,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:54,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 05457a46c846553f9d7d5d4757f732bf, disabling compactions & flushes 2023-07-21 15:15:54,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:54,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:54,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. after waiting 0 ms 2023-07-21 15:15:54,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 2023-07-21 15:15:54,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:54,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf. 
2023-07-21 15:15:54,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 05457a46c846553f9d7d5d4757f732bf: 2023-07-21 15:15:54,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:54,556 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=99 updating hbase:meta row=05457a46c846553f9d7d5d4757f732bf, regionState=CLOSED 2023-07-21 15:15:54,556 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689952554556"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952554556"}]},"ts":"1689952554556"} 2023-07-21 15:15:54,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=99 2023-07-21 15:15:54,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=99, state=SUCCESS; CloseRegionProcedure 05457a46c846553f9d7d5d4757f732bf, server=jenkins-hbase17.apache.org,36355,1689952536596 in 162 msec 2023-07-21 15:15:54,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 15:15:54,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=05457a46c846553f9d7d5d4757f732bf, UNASSIGN in 168 msec 2023-07-21 15:15:54,561 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952554561"}]},"ts":"1689952554561"} 2023-07-21 15:15:54,563 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-21 15:15:54,564 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-21 15:15:54,566 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 184 msec 2023-07-21 15:15:54,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=98 2023-07-21 15:15:54,695 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 98 completed 2023-07-21 15:15:54,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_testCloneSnapshot_clone 2023-07-21 15:15:54,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,707 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=101, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-21 15:15:54,711 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,712 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=101, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:54,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-21 15:15:54,717 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:54,720 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/recovered.edits, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/test] 2023-07-21 15:15:54,726 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/recovered.edits/4.seqid to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf/recovered.edits/4.seqid 2023-07-21 15:15:54,728 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/default/Group_testCloneSnapshot_clone/05457a46c846553f9d7d5d4757f732bf 2023-07-21 15:15:54,728 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-21 15:15:54,731 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=101, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,734 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-21 15:15:54,736 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-21 15:15:54,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=101, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-21 15:15:54,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952554737"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:54,739 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:54,739 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 05457a46c846553f9d7d5d4757f732bf, NAME => 'Group_testCloneSnapshot_clone,,1689952552322.05457a46c846553f9d7d5d4757f732bf.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:54,740 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-21 15:15:54,740 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952554740"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:54,750 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-21 15:15:54,752 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=101, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:15:54,753 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 56 msec 2023-07-21 15:15:54,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-21 15:15:54,817 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 101 completed 2023-07-21 15:15:54,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:54,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
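The rsgroup entries above, and the ConstraintException stack trace that follows, come from the test's per-method teardown (TestRSGroupsBase.tearDownAfterMethod): it lists the groups, moves any stray tables and servers back to default, recreates the master group, and then attempts to move the master's own address into it, which the endpoint rejects because jenkins-hbase17.apache.org:43019 is the master RPC port, not a region server. A hedged sketch of that sequence against the hbase-rsgroup client is shown below; it assumes the RSGroupAdminClient API of the branch-2 hbase-rsgroup module and is not a reproduction of the test code itself.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.listRSGroups();          // ListRSGroupInfos in the log
          rsGroupAdmin.removeRSGroup("master"); // RemoveRSGroup
          rsGroupAdmin.addRSGroup("master");    // AddRSGroup
          try {
            // Port 43019 is the master's RPC port, not a region server, so the endpoint
            // rejects the move with a ConstraintException, as the stack trace shows.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 43019)),
                "master");
          } catch (ConstraintException expected) {
            System.out.println("expected: " + expected.getMessage());
          }
        }
      }
    }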
2023-07-21 15:15:54,824 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:54,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:54,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:54,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:54,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:54,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:54,836 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:54,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:54,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:54,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:54,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:54,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:54,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 566 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953754852, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:54,853 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:54,855 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:54,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,857 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:54,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:54,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:54,880 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=512 (was 509) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,39253,1689952540479' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,38527,1689952536414' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:43498 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-646185821_17 at /127.0.0.1:45414 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-15 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: (jenkins-hbase17.apache.org,43019,1689952533620)-proc-coordinator-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,36355,1689952536596' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-646185821_17 at /127.0.0.1:48010 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase17.apache.org,41299,1689952542769' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x38dd441b-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=798 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=848 (was 848), ProcessCount=186 (was 186), AvailableMemoryMB=1828 (was 1818) - AvailableMemoryMB LEAK? - 2023-07-21 15:15:54,880 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:15:54,902 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=512, OpenFileDescriptor=798, MaxFileDescriptor=60000, SystemLoadAverage=848, ProcessCount=186, AvailableMemoryMB=1827 2023-07-21 15:15:54,902 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 15:15:54,902 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:54,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:54,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
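The "Potentially hanging thread" dump and the before/after counters above (Thread=512, OpenFileDescriptor=798, SystemLoadAverage, AvailableMemoryMB) come from the test harness's ResourceChecker, which snapshots resource counts around each test method and warns when the thread count drifts or crosses a threshold ("Thread=512 is superior to 500"). A minimal sketch of that style of check using only JDK APIs; the class, method names, and threshold below are illustrative placeholders, not HBase's own code.

    import java.lang.management.ManagementFactory;
    import java.util.Map;

    // Illustrative before/after thread check in the spirit of the ResourceChecker
    // output above; names and the threshold are placeholders.
    public class ThreadLeakCheckSketch {
        private static final int LEAK_THRESHOLD = 500; // mirrors the "superior to 500" warning

        // Current number of live threads in this JVM.
        static int liveThreadCount() {
            return ManagementFactory.getThreadMXBean().getThreadCount();
        }

        // Print each live thread with its stack, like the "Potentially hanging
        // thread" entries in the log.
        static void dumpLiveThreads() {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                System.out.println("Potentially hanging thread: " + e.getKey().getName());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }

        public static void main(String[] args) {
            int before = liveThreadCount();
            // ... the test method would run here ...
            int after = liveThreadCount();
            if (after > LEAK_THRESHOLD) {
                System.out.println("Thread=" + after + " is superior to " + LEAK_THRESHOLD);
                dumpLiveThreads();
            }
            System.out.println("Thread=" + after + " (was " + before + ") - Thread LEAK?");
        }
    }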
2023-07-21 15:15:54,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:54,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:54,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:54,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:54,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:54,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:54,918 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:54,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:54,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:54,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:54,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:54,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:54,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 594 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953754928, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:54,929 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:54,930 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:54,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,932 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:54,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:54,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:54,933 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:54,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:54,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:54,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup appInfo 2023-07-21 15:15:54,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:54,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:54,940 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:54,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36355] to rsgroup appInfo 2023-07-21 15:15:54,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:54,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:54,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:54,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:54,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 15:15:54,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36355,1689952536596] are moved back to default 2023-07-21 15:15:54,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-21 15:15:54,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:54,954 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:54,955 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:54,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=appInfo 2023-07-21 15:15:54,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:54,963 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-21 15:15:54,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.ServerManager(636): Server jenkins-hbase17.apache.org,36355,1689952536596 added to draining server list. 2023-07-21 15:15:54,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:54,966 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase17.apache.org,36355,1689952536596 is already in the draining server list.Ignoring request to add it again. 2023-07-21 15:15:54,966 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase17.apache.org,36355,1689952536596] 2023-07-21 15:15:54,968 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$15(3014): Client=jenkins//136.243.18.41 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-21 15:15:54,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=102, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:54,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=102 2023-07-21 15:15:54,976 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:54,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 9 msec 2023-07-21 15:15:55,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=102 2023-07-21 15:15:55,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:55,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:55,086 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:55,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 
procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 103 2023-07-21 15:15:55,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 15:15:55,101 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=103, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=25 msec 2023-07-21 15:15:55,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-21 15:15:55,195 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 103 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-21 15:15:55,196 DEBUG [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 15:15:55,205 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:55,205 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-21 15:15:55,205 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase17.apache.org,36355,1689952536596] 2023-07-21 15:15:55,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:15:55,211 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:55,214 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:15:55,214 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 104 2023-07-21 15:15:55,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 15:15:55,216 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:55,217 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:55,217 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:55,218 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:55,219 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:15:55,221 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,221 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 empty. 
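The rsgroup bookkeeping a few records back (add rsgroup appInfo, move servers [jenkins-hbase17.apache.org:36355] to rsgroup appInfo, the /hbase/rsgroup/* znode updates) is driven from the client through the same RSGroupAdminClient that appears in the stack traces earlier in this excerpt. A rough client-side sketch of that sequence, not the test's actual code; the Connection-taking constructor and the exact signatures are assumptions based on the branch-2.4 client, and the server address is copied from the log.

    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupSetupSketch {
        public static void main(String[] args) throws Exception {
            // Assumes an hbase-site.xml on the classpath pointing at the cluster.
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
                // Assumed constructor: the branch-2.4 client wraps an existing Connection.
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

                // "add rsgroup appInfo"
                rsGroupAdmin.addRSGroup("appInfo");

                // "move servers [jenkins-hbase17.apache.org:36355] to rsgroup appInfo"
                rsGroupAdmin.moveServers(
                    Collections.singleton(Address.fromString("jenkins-hbase17.apache.org:36355")),
                    "appInfo");

                // "GetRSGroupInfo" on the new group to confirm the membership.
                RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appInfo");
                System.out.println("appInfo servers: " + info.getServers());
            }
        }
    }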
2023-07-21 15:15:55,222 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,222 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-21 15:15:55,238 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-21 15:15:55,239 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0819aed2708759acf787bdb964a753c7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:15:55,249 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:55,249 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 0819aed2708759acf787bdb964a753c7, disabling compactions & flushes 2023-07-21 15:15:55,249 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,250 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,250 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. after waiting 0 ms 2023-07-21 15:15:55,250 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,250 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 
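The create path recorded here — namespace Group_ns carrying hbase.rsgroup.name => 'appInfo', then table Group_ns:testCreateWhenRsgroupNoOnlineServers with the single family 'f' — maps onto ordinary Admin calls. A minimal sketch assuming a stock HBase 2.x client rather than the test's own code; as the log shows, the first CreateTableProcedure (pid=103) rolled back with "No online servers in the rsgroup appInfo" while the group's only server was draining, and the retry (pid=104) went through once the server left the draining list.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class GroupNsCreateSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {

                // {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'}: the namespace
                // property pins every table in Group_ns to the appInfo rsgroup.
                admin.createNamespace(NamespaceDescriptor.create("Group_ns")
                    .addConfiguration("hbase.rsgroup.name", "appInfo")
                    .build());

                // Single column family 'f'; the other attributes printed in the log
                // (BLOOMFILTER, VERSIONS, BLOCKSIZE, ...) are the defaults.
                TableDescriptor table = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                    .build();

                // Fails while every appInfo server is draining, succeeds afterwards.
                admin.createTable(table);
            }
        }
    }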
2023-07-21 15:15:55,250 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 0819aed2708759acf787bdb964a753c7: 2023-07-21 15:15:55,253 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:15:55,254 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952555254"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952555254"}]},"ts":"1689952555254"} 2023-07-21 15:15:55,256 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:15:55,257 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:15:55,257 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952555257"}]},"ts":"1689952555257"} 2023-07-21 15:15:55,259 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-21 15:15:55,261 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, ASSIGN}] 2023-07-21 15:15:55,264 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, ASSIGN 2023-07-21 15:15:55,265 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36355,1689952536596; forceNewPlan=false, retain=false 2023-07-21 15:15:55,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 15:15:55,416 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=0819aed2708759acf787bdb964a753c7, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:55,416 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952555416"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952555416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952555416"}]},"ts":"1689952555416"} 2023-07-21 15:15:55,418 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; OpenRegionProcedure 0819aed2708759acf787bdb964a753c7, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:55,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 15:15:55,549 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:15:55,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0819aed2708759acf787bdb964a753c7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:15:55,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:15:55,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,578 INFO [StoreOpener-0819aed2708759acf787bdb964a753c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,580 DEBUG [StoreOpener-0819aed2708759acf787bdb964a753c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/f 2023-07-21 15:15:55,580 DEBUG [StoreOpener-0819aed2708759acf787bdb964a753c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/f 2023-07-21 15:15:55,580 INFO [StoreOpener-0819aed2708759acf787bdb964a753c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0819aed2708759acf787bdb964a753c7 columnFamilyName f 2023-07-21 15:15:55,581 INFO [StoreOpener-0819aed2708759acf787bdb964a753c7-1] regionserver.HStore(310): Store=0819aed2708759acf787bdb964a753c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:15:55,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:15:55,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 0819aed2708759acf787bdb964a753c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10354289760, jitterRate=-0.03568162024021149}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:15:55,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 0819aed2708759acf787bdb964a753c7: 2023-07-21 15:15:55,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7., pid=106, masterSystemTime=1689952555570 2023-07-21 15:15:55,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 
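OpenRegionProcedure above reports region 0819aed2708759acf787bdb964a753c7 opened on jenkins-hbase17.apache.org,36355,1689952536596 — the lone server in the appInfo group. Confirming that placement from the client side can go through the standard RegionLocator API; a small sketch, with only the table name taken from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementCheckSketch {
        public static void main(String[] args) throws Exception {
            TableName tn = TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers");
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 RegionLocator locator = conn.getRegionLocator(tn)) {
                // The log shows a single region (empty start/end keys); print the
                // server each region of the table is currently hosted on.
                for (HRegionLocation loc : locator.getAllRegionLocations()) {
                    System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
                }
            }
        }
    }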
2023-07-21 15:15:55,593 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=0819aed2708759acf787bdb964a753c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:55,593 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952555593"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952555593"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952555593"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952555593"}]},"ts":"1689952555593"} 2023-07-21 15:15:55,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-21 15:15:55,596 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; OpenRegionProcedure 0819aed2708759acf787bdb964a753c7, server=jenkins-hbase17.apache.org,36355,1689952536596 in 176 msec 2023-07-21 15:15:55,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-21 15:15:55,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, ASSIGN in 334 msec 2023-07-21 15:15:55,597 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:15:55,598 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952555598"}]},"ts":"1689952555598"} 2023-07-21 15:15:55,599 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-21 15:15:55,601 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=104, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:15:55,602 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 392 msec 2023-07-21 15:15:55,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 15:15:55,820 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 104 completed 2023-07-21 15:15:55,821 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:55,826 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:55,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$11(2418): Client=jenkins//136.243.18.41 disable 
Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:55,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:55,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 15:15:55,833 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952555833"}]},"ts":"1689952555833"} 2023-07-21 15:15:55,835 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-21 15:15:55,837 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-21 15:15:55,838 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, UNASSIGN}] 2023-07-21 15:15:55,839 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, UNASSIGN 2023-07-21 15:15:55,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=0819aed2708759acf787bdb964a753c7, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:55,840 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952555840"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952555840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952555840"}]},"ts":"1689952555840"} 2023-07-21 15:15:55,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 0819aed2708759acf787bdb964a753c7, server=jenkins-hbase17.apache.org,36355,1689952536596}] 2023-07-21 15:15:55,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 15:15:55,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:55,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 0819aed2708759acf787bdb964a753c7, disabling compactions & flushes 2023-07-21 15:15:55,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 
2023-07-21 15:15:55,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. after waiting 0 ms 2023-07-21 15:15:55,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:55,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:15:56,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7. 2023-07-21 15:15:56,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 0819aed2708759acf787bdb964a753c7: 2023-07-21 15:15:56,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:56,002 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=0819aed2708759acf787bdb964a753c7, regionState=CLOSED 2023-07-21 15:15:56,002 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689952556002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952556002"}]},"ts":"1689952556002"} 2023-07-21 15:15:56,005 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-21 15:15:56,006 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 0819aed2708759acf787bdb964a753c7, server=jenkins-hbase17.apache.org,36355,1689952536596 in 162 msec 2023-07-21 15:15:56,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-21 15:15:56,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=0819aed2708759acf787bdb964a753c7, UNASSIGN in 169 msec 2023-07-21 15:15:56,008 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952556008"}]},"ts":"1689952556008"} 2023-07-21 15:15:56,009 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-21 15:15:56,010 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-21 15:15:56,012 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 185 msec 2023-07-21 15:15:56,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-21 15:15:56,136 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 107 completed 2023-07-21 15:15:56,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$5(2228): Client=jenkins//136.243.18.41 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,139 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-21 15:15:56,139 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:56,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:56,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:15:56,143 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:56,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 15:15:56,145 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/f, FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/recovered.edits] 2023-07-21 15:15:56,152 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7/recovered.edits/4.seqid 2023-07-21 15:15:56,153 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/0819aed2708759acf787bdb964a753c7 2023-07-21 15:15:56,153 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-21 15:15:56,155 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,157 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-21 15:15:56,159 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-21 15:15:56,160 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,160 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-21 15:15:56,160 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689952556160"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:56,162 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 15:15:56,162 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0819aed2708759acf787bdb964a753c7, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689952555208.0819aed2708759acf787bdb964a753c7.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 15:15:56,162 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
2023-07-21 15:15:56,162 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689952556162"}]},"ts":"9223372036854775807"} 2023-07-21 15:15:56,164 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-21 15:15:56,166 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:15:56,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 29 msec 2023-07-21 15:15:56,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 15:15:56,246 INFO [Listener at localhost.localdomain/38883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 110 completed 2023-07-21 15:15:56,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.HMaster$17(3086): Client=jenkins//136.243.18.41 delete Group_ns 2023-07-21 15:15:56,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,256 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=111, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,258 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=111, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 15:15:56,260 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=111, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,260 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-21 15:15:56,261 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 15:15:56,261 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=111, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,263 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=111, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:15:56,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 10 msec 2023-07-21 15:15:56,359 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 15:15:56,360 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,361 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:56,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:56,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:56,363 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:56,363 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:56,364 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:56,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:56,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:15:56,371 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:56,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:56,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:15:56,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:56,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:36355] to rsgroup default 2023-07-21 15:15:56,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-21 15:15:56,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:56,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-21 15:15:56,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36355,1689952536596] are moved back to appInfo 2023-07-21 15:15:56,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-21 15:15:56,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:56,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup appInfo 2023-07-21 15:15:56,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:56,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:56,388 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:56,389 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:56,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:56,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:56,394 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 
2023-07-21 15:15:56,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:56,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:56,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 696 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953756399, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:56,400 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:15:56,402 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:56,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,403 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:56,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:56,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:56,426 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=513 (was 512) Potentially hanging thread: hconnection-0x46251d71-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_264418683_17 at /127.0.0.1:43498 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-18 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x46251d71-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=765 (was 798), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=780 (was 848), ProcessCount=186 (was 186), AvailableMemoryMB=1822 (was 1827) 2023-07-21 15:15:56,427 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-21 15:15:56,461 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=513, OpenFileDescriptor=765, MaxFileDescriptor=60000, SystemLoadAverage=780, ProcessCount=186, AvailableMemoryMB=1821 2023-07-21 15:15:56,461 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-21 15:15:56,461 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-21 15:15:56,470 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,472 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:56,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 15:15:56,472 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:56,472 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:56,472 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:56,473 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:56,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:56,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:56,479 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:56,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:56,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:56,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:56,484 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:56,487 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,487 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:56,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:56,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 724 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953756489, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:56,490 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:56,492 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:56,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,494 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:56,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:56,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:56,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:56,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:56,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): 
Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:56,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:56,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:56,501 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:56,501 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:56,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:56,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:56,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:56,510 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:56,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:56,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:56,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:56,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:56,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:56,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:56,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 754 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953756520, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:56,521 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at 
org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 15:15:56,522 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:56,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,524 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:56,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:56,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:56,543 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=514 (was 513) Potentially hanging thread: hconnection-0x46251d71-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=765 (was 765), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=780 (was 780), ProcessCount=186 (was 186), AvailableMemoryMB=1813 (was 1821) 2023-07-21 15:15:56,543 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-21 15:15:56,559 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=514, OpenFileDescriptor=765, MaxFileDescriptor=60000, SystemLoadAverage=780, ProcessCount=186, AvailableMemoryMB=1809 2023-07-21 15:15:56,559 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-21 15:15:56,560 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-21 15:15:56,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:15:56,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:15:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:15:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:15:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:15:56,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:15:56,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:15:56,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:15:56,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:15:56,574 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 15:15:56,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:15:56,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/default 2023-07-21 15:15:56,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:15:56,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:15:56,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:15:56,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43019] to rsgroup master 2023-07-21 15:15:56,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:15:56,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] ipc.CallRunner(144): callId: 782 service: MasterService methodName: ExecMasterService size: 120 connection: 136.243.18.41:48124 deadline: 1689953756586, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 2023-07-21 15:15:56,587 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43019 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:15:56,588 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:15:56,589 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:15:56,589 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:15:56,590 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36355, jenkins-hbase17.apache.org:38527, jenkins-hbase17.apache.org:39253, jenkins-hbase17.apache.org:41299], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:15:56,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:15:56,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43019] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:15:56,591 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-21 15:15:56,591 INFO [Listener at localhost.localdomain/38883] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:15:56,591 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a8f3be3 to 127.0.0.1:62052 2023-07-21 15:15:56,591 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,591 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:15:56,591 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(257): Found active master hash=456243625, stopped=false 2023-07-21 15:15:56,591 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:15:56,592 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:15:56,592 INFO [Listener at localhost.localdomain/38883] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, 
quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:15:56,593 INFO [Listener at localhost.localdomain/38883] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:15:56,593 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:56,593 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:56,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:56,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:56,594 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5734e33a to 127.0.0.1:62052 2023-07-21 15:15:56,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:56,594 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:56,594 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38527,1689952536414' ***** 2023-07-21 15:15:56,594 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:15:56,594 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36355,1689952536596' ***** 2023-07-21 15:15:56,594 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:15:56,594 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:56,594 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:56,594 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,39253,1689952540479' ***** 2023-07-21 15:15:56,595 INFO [Listener at 
localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:15:56,597 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,41299,1689952542769' ***** 2023-07-21 15:15:56,597 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:56,598 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:15:56,601 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:56,606 INFO [RS:2;jenkins-hbase17:36355] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4737c079{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:56,606 INFO [RS:3;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@79c5b1d7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:56,606 INFO [RS:1;jenkins-hbase17:38527] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@47469f82{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:56,606 INFO [RS:4;jenkins-hbase17:41299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@13d1a755{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:56,607 INFO [RS:2;jenkins-hbase17:36355] server.AbstractConnector(383): Stopped ServerConnector@554c4191{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:56,607 INFO [RS:1;jenkins-hbase17:38527] server.AbstractConnector(383): Stopped ServerConnector@4876c5d5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:56,607 INFO [RS:3;jenkins-hbase17:39253] server.AbstractConnector(383): Stopped ServerConnector@26a4e210{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:56,607 INFO [RS:1;jenkins-hbase17:38527] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:56,607 INFO [RS:2;jenkins-hbase17:36355] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:56,607 INFO [RS:4;jenkins-hbase17:41299] server.AbstractConnector(383): Stopped ServerConnector@40ac24e8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:56,608 INFO [RS:1;jenkins-hbase17:38527] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@66f1a447{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:56,607 INFO [RS:3;jenkins-hbase17:39253] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:56,609 INFO [RS:2;jenkins-hbase17:36355] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33c3bc88{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:56,609 INFO 
[RS:4;jenkins-hbase17:41299] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:56,610 INFO [RS:3;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@77e496fe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:56,612 INFO [RS:4;jenkins-hbase17:41299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7955a1e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:56,610 INFO [RS:1;jenkins-hbase17:38527] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1dd83568{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:56,613 INFO [RS:4;jenkins-hbase17:41299] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@727b6878{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:56,612 INFO [RS:3;jenkins-hbase17:39253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5fc83f2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:56,614 INFO [RS:1;jenkins-hbase17:38527] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:15:56,614 INFO [RS:1;jenkins-hbase17:38527] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:15:56,611 INFO [RS:2;jenkins-hbase17:36355] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f3c4bd9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:56,614 INFO [RS:1;jenkins-hbase17:38527] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:15:56,614 INFO [RS:4;jenkins-hbase17:41299] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:15:56,614 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:56,614 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:15:56,614 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:15:56,614 DEBUG [RS:1;jenkins-hbase17:38527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x01825c0b to 127.0.0.1:62052 2023-07-21 15:15:56,614 INFO [RS:4;jenkins-hbase17:41299] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:15:56,615 INFO [RS:2;jenkins-hbase17:36355] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:15:56,615 INFO [RS:2;jenkins-hbase17:36355] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
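
Annotation (not part of the original log): the ConstraintException traces above come from the test teardown frame TestRSGroupsBase.tearDownAfterMethod calling VerifyingRSGroupAdminClient.moveServers → RSGroupAdminClient.moveServers, trying to move jenkins-hbase17.apache.org:43019 (the master's RPC address, which is not among the region servers listed in the default group) into the "master" rsgroup; the server rejects it and the test only logs "Got this on setup, FYI". A minimal sketch of that kind of call is below. The class name, the ConnectionFactory usage, and the configuration setup are illustrative assumptions; only the host, port, group name, and the RSGroupAdminClient.moveServers API are taken from the traces in this log.

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Address and target group as seen in the log above. Because
      // jenkins-hbase17.apache.org:43019 is the master endpoint and not an
      // online region server, RSGroupAdminServer.moveServers throws
      // ConstraintException: "Server ... is either offline or it does not exist."
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase17.apache.org", 43019)),
          "master");
    }
  }
}
```
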
2023-07-21 15:15:56,615 INFO [RS:2;jenkins-hbase17:36355] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:15:56,615 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:15:56,615 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:56,615 DEBUG [RS:2;jenkins-hbase17:36355] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4786024c to 127.0.0.1:62052 2023-07-21 15:15:56,615 DEBUG [RS:2;jenkins-hbase17:36355] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,615 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36355,1689952536596; all regions closed. 2023-07-21 15:15:56,614 DEBUG [RS:1;jenkins-hbase17:38527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,614 INFO [RS:3;jenkins-hbase17:39253] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:15:56,615 INFO [RS:1;jenkins-hbase17:38527] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:15:56,616 INFO [RS:1;jenkins-hbase17:38527] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:15:56,616 INFO [RS:1;jenkins-hbase17:38527] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:15:56,615 INFO [RS:4;jenkins-hbase17:41299] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:15:56,616 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:15:56,616 INFO [RS:3;jenkins-hbase17:39253] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:15:56,616 INFO [RS:3;jenkins-hbase17:39253] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 15:15:56,616 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(3305): Received CLOSE for 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:56,616 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:15:56,616 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:56,616 DEBUG [RS:3;jenkins-hbase17:39253] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16726dfc to 127.0.0.1:62052 2023-07-21 15:15:56,616 DEBUG [RS:3;jenkins-hbase17:39253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,617 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:15:56,617 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1478): Online Regions={7697a92683cfac49519e4a4111355983=hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.} 2023-07-21 15:15:56,616 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(3305): Received CLOSE for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:56,622 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:56,622 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:15:56,622 DEBUG [RS:4;jenkins-hbase17:41299] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63998d6f to 127.0.0.1:62052 2023-07-21 15:15:56,622 DEBUG [RS:4;jenkins-hbase17:41299] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,622 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:15:56,622 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1478): Online Regions={603dc738ccec189e3bde34ff84c46389=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.} 2023-07-21 15:15:56,622 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 15:15:56,626 DEBUG [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1504): Waiting on 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:15:56,625 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:15:56,626 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:15:56,626 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:15:56,626 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:15:56,626 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:15:56,626 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=48.29 KB heapSize=77.86 KB 2023-07-21 15:15:56,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:15:56,627 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:15:56,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:56,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:15:56,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:56,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:56,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. after waiting 0 ms 2023-07-21 15:15:56,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:56,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 603dc738ccec189e3bde34ff84c46389 1/1 column families, dataSize=9.72 KB heapSize=15.93 KB 2023-07-21 15:15:56,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 7697a92683cfac49519e4a4111355983 1/1 column families, dataSize=287 B heapSize=920 B 2023-07-21 15:15:56,626 DEBUG [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1504): Waiting on 7697a92683cfac49519e4a4111355983 2023-07-21 15:15:56,626 DEBUG [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 15:15:56,634 DEBUG [RS:2;jenkins-hbase17:36355] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:56,634 INFO [RS:2;jenkins-hbase17:36355] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C36355%2C1689952536596:(num 1689952538616) 2023-07-21 15:15:56,634 DEBUG [RS:2;jenkins-hbase17:36355] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,634 INFO [RS:2;jenkins-hbase17:36355] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,640 INFO [RS:2;jenkins-hbase17:36355] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:15:56,648 INFO [RS:2;jenkins-hbase17:36355] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
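
Annotation (not part of the original log): the entries around this point record the mini-cluster teardown — "Shutting down cluster", JVMClusterUtil requesting shutdown, each region server receiving STOPPING, then every online region being flushed and closed with a recovered.edits seqid marker. A minimal sketch of how such a shutdown is typically driven from test code is below; HBaseTestingUtility and its startMiniCluster/shutdownMiniCluster methods are standard HBase test APIs, but whether TestRSGroupsBasics invokes them in exactly this form is an assumption.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterShutdownSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(); // cluster sizing omitted in this sketch
    try {
      // ... test body would run against the mini cluster here ...
    } finally {
      // Triggers the sequence recorded in the surrounding entries:
      // "Shutting down cluster" -> "***** STOPPING region server ... *****"
      // -> memstore flushes, region close journals, recovered.edits seqids.
      util.shutdownMiniCluster();
    }
  }
}
```
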
2023-07-21 15:15:56,648 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:56,648 INFO [RS:2;jenkins-hbase17:36355] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:15:56,649 INFO [RS:2;jenkins-hbase17:36355] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:15:56,650 INFO [RS:2;jenkins-hbase17:36355] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36355 2023-07-21 15:15:56,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.72 KB at sequenceid=73 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:15:56,683 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=42.65 KB at sequenceid=142 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:15:56,687 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,687 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,688 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,688 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:15:56,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=287 B at sequenceid=17 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/.tmp/info/15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:15:56,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:15:56,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:15:56,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:15:56,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/.tmp/info/15fdaef33b9647fab27918fa7b51727e as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:15:56,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:15:56,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b, entries=14, sequenceid=73, filesize=5.5 K 2023-07-21 15:15:56,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.72 KB/9952, heapSize ~15.91 KB/16296, currentSize=0 B/0 for 603dc738ccec189e3bde34ff84c46389 in 72ms, sequenceid=73, compaction requested=false 2023-07-21 15:15:56,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:15:56,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:15:56,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/15fdaef33b9647fab27918fa7b51727e, entries=3, sequenceid=17, filesize=5.0 K 2023-07-21 15:15:56,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~287 B/287, heapSize ~904 B/904, currentSize=0 B/0 for 7697a92683cfac49519e4a4111355983 in 82ms, sequenceid=17, compaction requested=false 2023-07-21 15:15:56,710 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=142 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/rep_barrier/c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:15:56,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/76.seqid, newMaxSeqId=76, maxSeqId=34 2023-07-21 15:15:56,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:15:56,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:15:56,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:15:56,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:15:56,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=9 2023-07-21 15:15:56,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:56,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:15:56,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:15:56,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36355,1689952536596 2023-07-21 15:15:56,728 DEBUG [Listener at 
localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,728 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,729 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36355,1689952536596] 2023-07-21 15:15:56,729 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36355,1689952536596; numProcessing=1 2023-07-21 15:15:56,729 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36355,1689952536596 already deleted, retry=false 2023-07-21 15:15:56,729 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36355,1689952536596 expired; onlineServers=3 2023-07-21 15:15:56,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.91 KB at sequenceid=142 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/table/916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:15:56,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:15:56,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/61fcafcc9c244e3eb1f1f966564d855c as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:15:56,745 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:15:56,745 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c, entries=62, sequenceid=142, filesize=11.7 K 2023-07-21 15:15:56,746 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/rep_barrier/c7e6e1836f7f4098a404b796a61af07f as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier/c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:15:56,753 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:15:56,753 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier/c7e6e1836f7f4098a404b796a61af07f, entries=16, sequenceid=142, filesize=6.7 K 2023-07-21 15:15:56,754 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/table/916df231b8fb48db908a7ebc1b240c3d as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:15:56,762 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:15:56,762 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/916df231b8fb48db908a7ebc1b240c3d, entries=27, sequenceid=142, filesize=7.1 K 2023-07-21 15:15:56,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~48.29 KB/49448, heapSize ~77.81 KB/79680, currentSize=0 B/0 for 1588230740 in 137ms, sequenceid=142, compaction requested=false 2023-07-21 15:15:56,773 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/recovered.edits/145.seqid, newMaxSeqId=145, maxSeqId=1 2023-07-21 15:15:56,774 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:15:56,775 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:15:56,775 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:15:56,775 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:15:56,826 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41299,1689952542769; all regions closed. 2023-07-21 15:15:56,830 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38527,1689952536414; all regions closed. 2023-07-21 15:15:56,830 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,39253,1689952540479; all regions closed. 
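Note: before hbase:meta closes, its three column families (info, rep_barrier, table) are flushed and their HFiles committed, as logged above. A minimal client-side sketch of inspecting the same info family follows; it assumes a running cluster reachable through the default configuration and is not taken from the test itself.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMeta {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Restrict the scan to the info family, the one flushed with 62 entries above.
      Scan scan = new Scan().addFamily(Bytes.toBytes("info"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result row : scanner) {
          // Each row key names a region known to the catalog.
          System.out.println(Bytes.toStringBinary(row.getRow()));
        }
      }
    }
  }
}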
2023-07-21 15:15:56,857 DEBUG [RS:3;jenkins-hbase17:39253] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:56,857 INFO [RS:3;jenkins-hbase17:39253] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C39253%2C1689952540479:(num 1689952541213) 2023-07-21 15:15:56,857 DEBUG [RS:3;jenkins-hbase17:39253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,857 INFO [RS:3;jenkins-hbase17:39253] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,858 INFO [RS:3;jenkins-hbase17:39253] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:15:56,858 INFO [RS:3;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:15:56,858 INFO [RS:3;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:15:56,858 INFO [RS:3;jenkins-hbase17:39253] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:15:56,858 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:56,859 INFO [RS:3;jenkins-hbase17:39253] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:39253 2023-07-21 15:15:56,861 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:56,861 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:56,861 DEBUG [RS:4;jenkins-hbase17:41299] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:56,861 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,39253,1689952540479 2023-07-21 15:15:56,861 INFO [RS:4;jenkins-hbase17:41299] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C41299%2C1689952542769:(num 1689952543343) 2023-07-21 15:15:56,861 DEBUG [RS:4;jenkins-hbase17:41299] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,861 INFO [RS:4;jenkins-hbase17:41299] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,861 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,862 INFO [RS:4;jenkins-hbase17:41299] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore 
name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:15:56,862 INFO [RS:4;jenkins-hbase17:41299] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:15:56,862 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:56,862 INFO [RS:4;jenkins-hbase17:41299] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:15:56,862 INFO [RS:4;jenkins-hbase17:41299] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:15:56,862 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,39253,1689952540479] 2023-07-21 15:15:56,862 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,39253,1689952540479; numProcessing=2 2023-07-21 15:15:56,863 INFO [RS:4;jenkins-hbase17:41299] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41299 2023-07-21 15:15:56,864 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,39253,1689952540479 already deleted, retry=false 2023-07-21 15:15:56,864 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,39253,1689952540479 expired; onlineServers=2 2023-07-21 15:15:56,865 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:56,865 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41299,1689952542769 2023-07-21 15:15:56,866 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,866 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41299,1689952542769] 2023-07-21 15:15:56,866 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41299,1689952542769; numProcessing=3 2023-07-21 15:15:56,867 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41299,1689952542769 already deleted, retry=false 2023-07-21 15:15:56,867 DEBUG [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:56,867 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41299,1689952542769 expired; onlineServers=1 2023-07-21 15:15:56,867 INFO [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38527%2C1689952536414.meta:.meta(num 1689952538803) 2023-07-21 15:15:56,877 DEBUG [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:15:56,878 INFO [RS:1;jenkins-hbase17:38527] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38527%2C1689952536414:(num 1689952538620) 2023-07-21 15:15:56,878 DEBUG [RS:1;jenkins-hbase17:38527] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:56,878 INFO [RS:1;jenkins-hbase17:38527] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:15:56,878 INFO [RS:1;jenkins-hbase17:38527] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:15:56,878 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:56,879 INFO [RS:1;jenkins-hbase17:38527] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38527 2023-07-21 15:15:56,881 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38527,1689952536414 2023-07-21 15:15:56,884 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:15:56,983 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:56,983 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:38527-0x1018872b3790002, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:56,983 INFO [RS:1;jenkins-hbase17:38527] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38527,1689952536414; zookeeper connection closed. 
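Note: each regionserver's exit is observed through ZooKeeper — its ephemeral znode under /hbase/rs is deleted and the remaining watchers receive NodeDeleted and NodeChildrenChanged events, which is what drives the master's "processing expiration" entries above. The sketch below watches the same path with the plain ZooKeeper client; the quorum address 127.0.0.1:62052 comes from the log, while the session timeout is an arbitrary assumption.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchRegionServers {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Connection watcher: wait until the session is established before reading.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62052", 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();
    // One-shot child watch: fires with NodeChildrenChanged when an RS znode
    // appears or disappears under /hbase/rs, as seen in the ZKWatcher entries.
    List<String> servers = zk.getChildren("/hbase/rs",
        event -> System.out.println("Event: " + event.getType() + " on " + event.getPath()));
    System.out.println("Live regionservers: " + servers);
    zk.close();
  }
}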
2023-07-21 15:15:56,984 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@9cfde0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@9cfde0 2023-07-21 15:15:56,984 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38527,1689952536414] 2023-07-21 15:15:56,984 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38527,1689952536414; numProcessing=4 2023-07-21 15:15:56,985 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38527,1689952536414 already deleted, retry=false 2023-07-21 15:15:56,985 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38527,1689952536414 expired; onlineServers=0 2023-07-21 15:15:56,985 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43019,1689952533620' ***** 2023-07-21 15:15:56,985 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:15:56,985 DEBUG [M:0;jenkins-hbase17:43019] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a0cb392, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:15:56,985 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:15:56,990 INFO [M:0;jenkins-hbase17:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@276e9a5b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:15:56,992 INFO [M:0;jenkins-hbase17:43019] server.AbstractConnector(383): Stopped ServerConnector@4c452b0b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:56,992 INFO [M:0;jenkins-hbase17:43019] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:15:56,995 INFO [M:0;jenkins-hbase17:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3dbf6867{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:15:56,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:15:56,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:15:56,997 INFO [M:0;jenkins-hbase17:43019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11c58ab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:15:57,000 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegionServer(1144): stopping server 
jenkins-hbase17.apache.org,43019,1689952533620 2023-07-21 15:15:57,000 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43019,1689952533620; all regions closed. 2023-07-21 15:15:57,000 DEBUG [M:0;jenkins-hbase17:43019] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:15:57,000 INFO [M:0;jenkins-hbase17:43019] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:15:57,005 INFO [M:0;jenkins-hbase17:43019] server.AbstractConnector(383): Stopped ServerConnector@50f45965{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:15:57,006 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:57,010 DEBUG [M:0;jenkins-hbase17:43019] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:15:57,010 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 15:15:57,010 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952538004] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952538004,5,FailOnTimeoutGroup] 2023-07-21 15:15:57,010 DEBUG [M:0;jenkins-hbase17:43019] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:15:57,010 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952538008] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952538008,5,FailOnTimeoutGroup] 2023-07-21 15:15:57,010 INFO [M:0;jenkins-hbase17:43019] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:15:57,010 INFO [M:0;jenkins-hbase17:43019] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 15:15:57,010 INFO [M:0;jenkins-hbase17:43019] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 15:15:57,011 DEBUG [M:0;jenkins-hbase17:43019] master.HMaster(1512): Stopping service threads 2023-07-21 15:15:57,011 INFO [M:0;jenkins-hbase17:43019] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:15:57,011 ERROR [M:0;jenkins-hbase17:43019] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 15:15:57,012 INFO [M:0;jenkins-hbase17:43019] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:15:57,012 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 15:15:57,013 DEBUG [M:0;jenkins-hbase17:43019] zookeeper.ZKUtil(398): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:15:57,013 WARN [M:0;jenkins-hbase17:43019] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:15:57,013 INFO [M:0;jenkins-hbase17:43019] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:15:57,014 INFO [M:0;jenkins-hbase17:43019] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:15:57,014 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:15:57,014 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:57,014 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:57,014 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:15:57,014 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:15:57,014 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=374.06 KB heapSize=445.74 KB 2023-07-21 15:15:57,036 INFO [M:0;jenkins-hbase17:43019] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=374.06 KB at sequenceid=820 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/31d5965468314798babf1c1ceecb489d 2023-07-21 15:15:57,043 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/31d5965468314798babf1c1ceecb489d as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/31d5965468314798babf1c1ceecb489d 2023-07-21 15:15:57,050 INFO [M:0;jenkins-hbase17:43019] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/31d5965468314798babf1c1ceecb489d, entries=111, sequenceid=820, filesize=25.7 K 2023-07-21 15:15:57,051 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegion(2948): Finished flush of dataSize ~374.06 KB/383034, heapSize ~445.73 KB/456424, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 37ms, sequenceid=820, compaction requested=false 2023-07-21 15:15:57,053 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:15:57,053 DEBUG [M:0;jenkins-hbase17:43019] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:15:57,058 INFO [M:0;jenkins-hbase17:43019] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:15:57,059 INFO [M:0;jenkins-hbase17:43019] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43019 2023-07-21 15:15:57,059 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:15:57,060 DEBUG [M:0;jenkins-hbase17:43019] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,43019,1689952533620 already deleted, retry=false 2023-07-21 15:15:57,194 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,194 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43019-0x1018872b3790000, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,194 INFO [M:0;jenkins-hbase17:43019] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43019,1689952533620; zookeeper connection closed. 2023-07-21 15:15:57,295 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,295 INFO [RS:4;jenkins-hbase17:41299] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41299,1689952542769; zookeeper connection closed. 2023-07-21 15:15:57,295 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41299-0x1018872b379000d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,295 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64e33c6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64e33c6 2023-07-21 15:15:57,395 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,395 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:39253-0x1018872b379000b, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,395 INFO [RS:3;jenkins-hbase17:39253] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,39253,1689952540479; zookeeper connection closed. 2023-07-21 15:15:57,395 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f463e52] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f463e52 2023-07-21 15:15:57,495 INFO [RS:2;jenkins-hbase17:36355] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36355,1689952536596; zookeeper connection closed. 
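Note: the mini-cluster shutdown completes just below ("Shutdown of 1 master(s) and 5 regionserver(s) complete"), the test sleeps briefly, and from 15:15:59 a new master and regionservers come up on fresh random ports. A rough sketch of how a test can drive such a restart with HBaseTestingUtility follows; the utility instance, server count, sleep duration, and post-restart wait are assumptions rather than the actual TestRSGroupsBasics code, and the method names should be checked against the HBase version in use.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class ClusterRestartSketch {
  // Illustrative only: shut the HBase processes down, pause, then restart them
  // while leaving the underlying DFS and ZooKeeper mini-clusters running.
  public static void restart(HBaseTestingUtility util, int numRegionServers) throws Exception {
    util.shutdownMiniHBaseCluster();              // stops master + regionservers, as logged above
    Thread.sleep(2000);                           // analogous to the test's "Sleeping a bit"
    util.restartHBaseCluster(numRegionServers);   // new master/RS processes on random ports
    // Assumed convenience wait so callers don't race the reassignment of meta.
    util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
  }
}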
2023-07-21 15:15:57,495 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,496 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36355-0x1018872b3790003, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:15:57,496 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@bb1b6f1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@bb1b6f1 2023-07-21 15:15:57,496 INFO [Listener at localhost.localdomain/38883] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-21 15:15:57,497 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-21 15:15:59,498 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 15:15:59,498 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 15:15:59,498 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 15:15:59,498 DEBUG [Listener at localhost.localdomain/38883] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,499 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:59,500 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:59,500 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43113 2023-07-21 15:15:59,501 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,502 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,503 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43113 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:59,506 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:431130x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:59,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43113-0x1018872b3790010 connected 2023-07-21 15:15:59,509 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:59,510 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:59,510 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:59,511 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43113 2023-07-21 15:15:59,511 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43113 2023-07-21 15:15:59,511 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43113 2023-07-21 15:15:59,511 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43113 2023-07-21 15:15:59,511 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43113 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:59,514 INFO [Listener at localhost.localdomain/38883] 
http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:59,515 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 15:15:59,516 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 41331 2023-07-21 15:15:59,516 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:59,518 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,518 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6473921d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:59,518 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,518 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76613683{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:59,630 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:59,631 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:59,632 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:59,633 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:15:59,634 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,636 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3bd9010b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-41331-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8529139753882663703/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:15:59,638 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@34017405{HTTP/1.1, (http/1.1)}{0.0.0.0:41331} 2023-07-21 15:15:59,638 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @32450ms 2023-07-21 15:15:59,639 INFO [Listener at localhost.localdomain/38883] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3, hbase.cluster.distributed=false 2023-07-21 15:15:59,642 DEBUG [pool-346-thread-1] 
master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-21 15:15:59,657 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:59,657 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,657 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,657 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:59,657 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,658 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:59,658 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:59,658 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33615 2023-07-21 15:15:59,659 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:59,660 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:59,660 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,661 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,663 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33615 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:59,666 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:336150x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:59,667 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:336150x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:59,668 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:59,667 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:33615-0x1018872b3790011 connected 2023-07-21 15:15:59,669 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:59,669 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33615 2023-07-21 15:15:59,669 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33615 2023-07-21 15:15:59,669 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33615 2023-07-21 15:15:59,670 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33615 2023-07-21 15:15:59,670 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33615 2023-07-21 15:15:59,672 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:59,672 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:59,672 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:59,673 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:59,673 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:59,673 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:59,673 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
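Note: before binding its RPC port, each new regionserver instantiates its call queues — a default FIFO queue, a read/write-split priority queue, plus replication and meta-priority queues — each with a small handler count because this is a test cluster. Queue and handler sizing come from configuration; the keys below are commonly documented ones shown purely as a sketch, and exactly which executor each key feeds varies by HBase version, so treat the mapping to the executors named in the log as an assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcQueueTuning {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Overall handler pool size; small values like the handlerCount=3 above suit tests.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Ratios that split call-queue handlers between reads, writes, and scans;
    // the precise executor they apply to is version-dependent (assumption).
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.66f);
    conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);
    return conf;
  }
}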
2023-07-21 15:15:59,674 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 42195 2023-07-21 15:15:59,674 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:59,680 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,680 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@99d5e88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:59,681 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,681 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e7a882b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:59,773 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:59,774 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:59,774 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:59,775 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:15:59,775 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,776 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7c78d807{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-42195-hbase-server-2_4_18-SNAPSHOT_jar-_-any-699646619706811862/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:59,777 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@1712c33c{HTTP/1.1, (http/1.1)}{0.0.0.0:42195} 2023-07-21 15:15:59,777 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @32589ms 2023-07-21 15:15:59,787 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:59,787 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,787 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:15:59,787 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:59,787 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,788 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:59,788 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:59,788 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:33915 2023-07-21 15:15:59,789 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:59,790 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:59,790 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,791 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,792 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33915 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:59,795 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:339150x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:59,797 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:339150x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:59,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33915-0x1018872b3790012 connected 2023-07-21 15:15:59,798 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:59,799 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:59,800 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33915 2023-07-21 15:15:59,800 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33915 2023-07-21 15:15:59,800 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33915 2023-07-21 15:15:59,801 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33915 2023-07-21 15:15:59,801 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33915 2023-07-21 15:15:59,802 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:59,802 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:59,803 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:59,803 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:59,803 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:59,803 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:59,803 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
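The region-server startup above connects RecoverableZooKeeper to the ensemble at 127.0.0.1:62052 and sets watchers on znodes that do not yet exist (/hbase/master, /hbase/running, /hbase/acl). A minimal sketch of that watch-before-create pattern, using the plain Apache ZooKeeper client rather than HBase's ZKUtil/ZKWatcher (the class name and latch handling are illustrative only):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Illustrative only: set a watch on a znode that may not exist yet, the idea
// behind "Set watcher on znode that does not yet exist, /hbase/master".
public class ExistsWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62052", 90_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
      if (event.getType() == Watcher.Event.EventType.NodeCreated) {
        System.out.println("znode created: " + event.getPath());
      }
    });
    connected.await();

    // exists() returns null while the znode is absent, but the watch is still
    // registered and fires once /hbase/master is created by the active master.
    Stat stat = zk.exists("/hbase/master", true);
    System.out.println("/hbase/master " + (stat == null ? "not yet present, watch set" : "already exists"));
    zk.close();
  }
}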
2023-07-21 15:15:59,804 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 36379 2023-07-21 15:15:59,804 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:59,805 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,805 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@504be1f3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:59,805 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,805 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@647710e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:15:59,911 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:15:59,912 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:15:59,913 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:15:59,913 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:15:59,921 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,922 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@bff2ab{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-36379-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8635143639643849586/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:15:59,923 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@4f2d6bc1{HTTP/1.1, (http/1.1)}{0.0.0.0:36379} 2023-07-21 15:15:59,924 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @32735ms 2023-07-21 15:15:59,937 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:15:59,938 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:15:59,939 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:44429 2023-07-21 15:15:59,939 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:15:59,940 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:15:59,941 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,942 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:15:59,942 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44429 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:15:59,947 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:444290x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:15:59,948 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:444290x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:15:59,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44429-0x1018872b3790013 connected 2023-07-21 15:15:59,949 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:15:59,950 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:15:59,950 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44429 2023-07-21 15:15:59,951 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44429 2023-07-21 15:15:59,952 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44429 2023-07-21 15:15:59,954 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44429 2023-07-21 15:15:59,955 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44429 2023-07-21 15:15:59,957 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:15:59,957 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:15:59,957 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:15:59,958 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:15:59,958 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:15:59,958 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:15:59,958 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:15:59,959 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 42687 2023-07-21 15:15:59,959 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:15:59,962 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,962 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d618ce1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:15:59,963 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:15:59,963 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7cc0a20a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:00,056 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:00,057 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:00,057 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:00,057 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:16:00,058 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:00,059 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a4c2592{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-42687-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7017887374857365734/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:00,061 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@4aea5bd2{HTTP/1.1, (http/1.1)}{0.0.0.0:42687} 2023-07-21 15:16:00,061 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @32872ms 2023-07-21 15:16:00,062 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:00,065 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3c0c3de6{HTTP/1.1, (http/1.1)}{0.0.0.0:44935} 2023-07-21 15:16:00,065 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @32876ms 2023-07-21 15:16:00,065 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,066 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:16:00,067 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,067 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:00,067 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:00,067 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:00,067 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:00,069 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:00,071 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:00,072 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:00,072 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,43113,1689952559498 from backup master directory 2023-07-21 15:16:00,078 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,078 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
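The master startup above registers a backup-master znode, takes /hbase/master, and then deletes its entry from the backup master directory. A minimal sketch of the underlying primitive, ephemeral-znode election with the plain ZooKeeper client (the helper name and error handling are illustrative, not HBase's ActiveMasterManager):

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: ephemeral-znode election in the spirit of
// /hbase/backup-masters/<server> and /hbase/master. Assumes the parent
// znodes already exist and the session is connected.
public final class MasterZNodeSketch {
  static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
    byte[] data = serverName.getBytes(StandardCharsets.UTF_8);
    // Register under backup-masters first; the znode vanishes if this JVM dies.
    zk.create("/hbase/backup-masters/" + serverName, data,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    try {
      // Whoever creates /hbase/master first is the active master.
      zk.create("/hbase/master", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      // Active now, so drop the backup-masters registration, as in the log.
      zk.delete("/hbase/backup-masters/" + serverName, -1);
      return true;
    } catch (KeeperException.NodeExistsException alreadyTaken) {
      return false; // stay a backup master and watch /hbase/master instead
    }
  }
}

Because both znodes are ephemeral, they disappear when the owning process dies, which is what lets watchers like the ones set on /hbase/master above fire without any explicit cleanup.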
2023-07-21 15:16:00,078 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:16:00,078 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,129 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:00,203 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3cf2cb6a to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:00,215 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@daa2753, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:00,216 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:00,217 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 15:16:00,223 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:00,237 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620-dead as it is dead 2023-07-21 15:16:00,239 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620-dead/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:16:00,244 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620-dead/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 after 4ms 2023-07-21 15:16:00,245 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(300): Renamed 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620-dead/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:16:00,245 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43019,1689952533620-dead 2023-07-21 15:16:00,246 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,250 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43113%2C1689952559498, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/oldWALs, maxLogs=10 2023-07-21 15:16:00,278 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:00,293 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:00,293 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:00,316 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:00,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK]] 2023-07-21 15:16:00,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:00,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:00,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:00,321 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:00,331 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:00,332 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:16:00,333 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:16:00,347 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/31d5965468314798babf1c1ceecb489d 2023-07-21 15:16:00,348 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:00,348 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-21 15:16:00,349 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:16:00,391 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 967, firstSequenceIdInLog=3, maxSequenceIdInLog=822, 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:16:00,393 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43019%2C1689952533620.1689952537211 2023-07-21 15:16:00,397 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:00,401 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/822.seqid, newMaxSeqId=822, maxSeqId=1 2023-07-21 15:16:00,402 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=823; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10846481280, jitterRate=0.010157287120819092}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:00,402 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:00,402 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:16:00,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:16:00,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:16:00,404 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
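The MasterRegion recovery above renames the dead master's WAL directory, recovers the HDFS lease on the old WAL ("Recovered lease, attempt=0 ... after 4ms"), replays its edits, and deletes the recovered file. A sketch of just the lease-recovery step with the stock HDFS client API (the helper name and fixed 1 s back-off are assumptions, not the actual RecoverLeaseFSUtils logic):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Illustrative only: force lease recovery on a WAL left behind by a dead
// writer, then wait until the NameNode reports the file as closed.
public final class RecoverWalLeaseSketch {
  static void recoverLease(Configuration conf, Path wal) throws Exception {
    FileSystem fs = wal.getFileSystem(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      return; // local filesystems have no leases to recover
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    boolean recovered = dfs.recoverLease(wal);
    while (!recovered && !dfs.isFileClosed(wal)) {
      Thread.sleep(1000);               // back off between attempts
      recovered = dfs.recoverLease(wal);
    }
  }
}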
2023-07-21 15:16:00,405 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 15:16:00,414 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-21 15:16:00,415 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:16:00,415 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-21 15:16:00,416 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-21 15:16:00,416 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:16:00,416 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:16:00,416 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false 2023-07-21 15:16:00,417 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-21 15:16:00,417 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:00,417 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:00,418 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:00,418 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:00,418 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:00,418 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; CreateTableProcedure 
table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:00,419 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=78, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 15:16:00,420 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=90, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 15:16:00,421 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 15:16:00,421 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:00,421 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:00,421 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:16:00,422 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:16:00,422 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=102, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 15:16:00,423 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup 
appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:00,423 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:00,423 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:00,423 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:00,423 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:16:00,424 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 18 msec 2023-07-21 15:16:00,424 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:16:00,425 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-21 15:16:00,425 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase17.apache.org,38527,1689952536414, table=hbase:meta, region=1588230740 2023-07-21 15:16:00,427 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-21 15:16:00,428 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41299,1689952542769 already deleted, retry=false 2023-07-21 15:16:00,428 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,41299,1689952542769 on jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,429 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,41299,1689952542769, splitWal=true, meta=false 2023-07-21 15:16:00,430 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=112 for jenkins-hbase17.apache.org,41299,1689952542769 (carryingMeta=false) jenkins-hbase17.apache.org,41299,1689952542769/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6e733b86[Write locks = 1, Read locks = 0], oldState=ONLINE. 
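Each ServerCrashProcedure above is scheduled while the server's java.util.concurrent.locks.ReentrantReadWriteLock shows "Write locks = 1, Read locks = 0". A generic sketch of that read/write-lock pattern, with hypothetical names and states that only echo the log's ONLINE/CRASHED wording:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: a write lock held while a server's state flips to CRASHED,
// with readers taking the read side.
public final class ServerStateSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private String state = "ONLINE";

  void markCrashed() {
    lock.writeLock().lock();        // excludes all readers and writers
    try {
      state = "CRASHED";            // matches ".../CRASHED/...lock=...[Write locks = 1]" above
    } finally {
      lock.writeLock().unlock();
    }
  }

  boolean isOnline() {
    lock.readLock().lock();         // many readers may hold this concurrently
    try {
      return "ONLINE".equals(state);
    } finally {
      lock.readLock().unlock();
    }
  }
}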
2023-07-21 15:16:00,430 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,39253,1689952540479 already deleted, retry=false 2023-07-21 15:16:00,430 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,39253,1689952540479 on jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,431 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,39253,1689952540479, splitWal=true, meta=false 2023-07-21 15:16:00,431 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=113 for jenkins-hbase17.apache.org,39253,1689952540479 (carryingMeta=false) jenkins-hbase17.apache.org,39253,1689952540479/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6dd1f983[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:16:00,432 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36355,1689952536596 already deleted, retry=false 2023-07-21 15:16:00,432 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,36355,1689952536596 on jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,433 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,36355,1689952536596, splitWal=true, meta=false 2023-07-21 15:16:00,433 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=114 for jenkins-hbase17.apache.org,36355,1689952536596 (carryingMeta=false) jenkins-hbase17.apache.org,36355,1689952536596/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4eb20f59[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:16:00,434 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38527,1689952536414 already deleted, retry=false 2023-07-21 15:16:00,434 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,38527,1689952536414 on jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,435 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=115, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true 2023-07-21 15:16:00,435 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=115 for jenkins-hbase17.apache.org,38527,1689952536414 (carryingMeta=true) jenkins-hbase17.apache.org,38527,1689952536414/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5c5e5c53[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-21 15:16:00,436 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-21 15:16:00,436 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:16:00,436 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:16:00,437 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:16:00,437 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:16:00,438 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:16:00,440 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:00,440 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:00,440 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:00,443 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:00,443 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:00,444 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,43113,1689952559498, sessionid=0x1018872b3790010, setting cluster-up flag (Was=false) 2023-07-21 15:16:00,447 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:16:00,448 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,450 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, 
/hbase/online-snapshot/abort 2023-07-21 15:16:00,451 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:00,453 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:16:00,453 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:16:00,455 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-21 15:16:00,455 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:00,455 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:16:00,456 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 15:16:00,456 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 15:16:00,458 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:00,459 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:38527 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:38527 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 15:16:00,461 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:38527 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:38527 2023-07-21 15:16:00,463 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:00,463 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:00,463 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:00,465 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:00,464 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:00,466 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:00,467 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:00,467 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:00,467 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:00,467 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:00,467 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:00,468 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:00,473 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001,
systemTablesOnMaster=false 2023-07-21 15:16:00,473 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:16:00,473 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:16:00,473 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:16:00,474 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:00,474 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:00,474 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:00,474 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:00,474 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:00,477 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:00,477 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ReadOnlyZKClient(139): Connect 0x7c9b8472 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:00,474 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:00,477 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:16:00,477 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,478 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ReadOnlyZKClient(139): Connect 0x40ef2bd5 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 
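Earlier in this block the StochasticLoadBalancer lists its cost functions and reports "sum of multiplier of cost functions = 0.0", meaning every weight is currently zero and the weighted total cost carries no signal. A generic illustration of such a weighted-cost sum (names and normalization are hypothetical, not the StochasticLoadBalancer implementation):

// Illustrative only: weighted-cost aggregation of the kind the log describes.
public final class WeightedCostSketch {
  interface CostFunction {
    double cost();        // normalized cost in [0, 1]
    double multiplier();  // weight from configuration
  }

  static double totalCost(CostFunction[] functions) {
    double weighted = 0.0, weights = 0.0;
    for (CostFunction f : functions) {
      weights += f.multiplier();
      weighted += f.multiplier() * f.cost();
    }
    // With "sum of multiplier of cost functions = 0.0" there is nothing to
    // normalize by, so no candidate plan can look better than another.
    return weights == 0.0 ? 0.0 : weighted / weights;
  }
}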
2023-07-21 15:16:00,478 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ReadOnlyZKClient(139): Connect 0x5fb809ae to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:00,478 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:00,480 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,487 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952590487 2023-07-21 15:16:00,487 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:16:00,488 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:16:00,488 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:16:00,488 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:16:00,488 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:16:00,488 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:16:00,494 DEBUG [RS:1;jenkins-hbase17:33915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@725c7591, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:00,494 DEBUG [RS:1;jenkins-hbase17:33915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7663ef02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:00,494 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,496 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:16:00,496 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:16:00,497 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:16:00,502 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38527,1689952536414; numProcessing=1 2023-07-21 15:16:00,502 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=115, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true 2023-07-21 15:16:00,503 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:16:00,503 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:16:00,503 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952560503,5,FailOnTimeoutGroup] 2023-07-21 15:16:00,502 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase17.apache.org,39253,1689952540479; numProcessing=2 2023-07-21 15:16:00,506 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=113, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,39253,1689952540479, splitWal=true, meta=false 2023-07-21 15:16:00,508 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:33915 2023-07-21 15:16:00,508 INFO [RS:1;jenkins-hbase17:33915] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:00,508 INFO [RS:1;jenkins-hbase17:33915] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:00,508 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:16:00,508 DEBUG [PEWorker-5] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36355,1689952536596; numProcessing=3 2023-07-21 15:16:00,508 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=114, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,36355,1689952536596, splitWal=true, meta=false 2023-07-21 15:16:00,508 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41299,1689952542769; numProcessing=4 2023-07-21 15:16:00,508 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=112, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,41299,1689952542769, splitWal=true, meta=false 2023-07-21 15:16:00,509 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43113,1689952559498 with isa=jenkins-hbase17.apache.org/136.243.18.41:33915, startcode=1689952559786 2023-07-21 15:16:00,509 DEBUG [RS:1;jenkins-hbase17:33915] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:00,509 DEBUG [RS:0;jenkins-hbase17:33615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@517ffc29, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:00,510 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=115, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true, isMeta: true 2023-07-21 15:16:00,510 DEBUG [RS:0;jenkins-hbase17:33615] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@554062de, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:00,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952560503,5,FailOnTimeoutGroup] 2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952560511, completionTime=-1 2023-07-21 15:16:00,511 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-21 15:16:00,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 15:16:00,514 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414-splitting 2023-07-21 15:16:00,515 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414-splitting dir is empty, no logs to split. 2023-07-21 15:16:00,515 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,38527,1689952536414 WAL count=0, meta=true 2023-07-21 15:16:00,516 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35627, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:00,518 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414-splitting dir is empty, no logs to split. 2023-07-21 15:16:00,518 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,38527,1689952536414 WAL count=0, meta=true 2023-07-21 15:16:00,518 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,38527,1689952536414 WAL splitting is done? wals=0, meta=true 2023-07-21 15:16:00,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:16:00,521 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43113] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,521 DEBUG [RS:2;jenkins-hbase17:44429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e28cdb8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:00,521 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:00,521 DEBUG [RS:2;jenkins-hbase17:44429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a1740a5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:00,523 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:00,523 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:00,523 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41331 2023-07-21 15:16:00,523 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:33615 2023-07-21 15:16:00,523 INFO [RS:0;jenkins-hbase17:33615] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:00,523 INFO [RS:0;jenkins-hbase17:33615] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:00,523 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:00,524 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:16:00,524 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=116, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:16:00,527 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:00,528 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43113,1689952559498 with isa=jenkins-hbase17.apache.org/136.243.18.41:33615, startcode=1689952559656 2023-07-21 15:16:00,529 DEBUG [RS:0;jenkins-hbase17:33615] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:00,529 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=116, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:00,530 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ZKUtil(162): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,530 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33915,1689952559786] 2023-07-21 15:16:00,531 WARN [RS:1;jenkins-hbase17:33915] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by 
start scripts (Longer MTTR!) 2023-07-21 15:16:00,531 INFO [RS:1;jenkins-hbase17:33915] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:00,531 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,533 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:43775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:00,534 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43113] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,534 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:00,534 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:16:00,535 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:44429 2023-07-21 15:16:00,535 INFO [RS:2;jenkins-hbase17:44429] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:00,535 INFO [RS:2;jenkins-hbase17:44429] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:00,535 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:00,535 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43113,1689952559498 with isa=jenkins-hbase17.apache.org/136.243.18.41:44429, startcode=1689952559937 2023-07-21 15:16:00,535 DEBUG [RS:2;jenkins-hbase17:44429] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:00,541 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:00,542 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:00,542 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41331 2023-07-21 15:16:00,543 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35169, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:00,543 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43113] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,543 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:00,543 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:16:00,545 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:00,545 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:00,545 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41331 2023-07-21 15:16:00,546 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:00,547 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ZKUtil(162): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,547 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ZKUtil(162): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,547 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ZKUtil(162): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,547 WARN [RS:2;jenkins-hbase17:44429] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:16:00,547 WARN [RS:0;jenkins-hbase17:33615] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:16:00,547 INFO [RS:2;jenkins-hbase17:44429] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:00,547 INFO [RS:0;jenkins-hbase17:33615] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:00,548 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,548 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ZKUtil(162): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,548 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,548 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,33615,1689952559656] 2023-07-21 15:16:00,548 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,44429,1689952559937] 2023-07-21 15:16:00,548 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ZKUtil(162): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,554 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:00,554 INFO [RS:1;jenkins-hbase17:33915] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:00,555 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ZKUtil(162): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,556 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ZKUtil(162): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,557 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ZKUtil(162): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,557 INFO [RS:1;jenkins-hbase17:33915] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:00,558 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:00,559 INFO [RS:2;jenkins-hbase17:44429] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:00,565 INFO [RS:1;jenkins-hbase17:33915] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:00,565 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore 
name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,566 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:00,567 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=56ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 15:16:00,573 INFO [RS:2;jenkins-hbase17:44429] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:00,573 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ZKUtil(162): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,573 INFO [RS:2;jenkins-hbase17:44429] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:00,573 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,574 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:00,576 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,576 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ZKUtil(162): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,577 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,592 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,593 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ZKUtil(162): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,593 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:38527 this server is in the failed servers list 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:00,594 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,594 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,594 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,594 DEBUG [RS:2;jenkins-hbase17:44429] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,593 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,594 DEBUG [RS:0;jenkins-hbase17:33615] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:00,594 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:00,595 INFO [RS:0;jenkins-hbase17:33615] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:00,595 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,595 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,595 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,595 DEBUG [RS:1;jenkins-hbase17:33915] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,596 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,596 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,596 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,596 INFO [RS:0;jenkins-hbase17:33615] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:00,596 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,598 INFO [RS:0;jenkins-hbase17:33615] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:00,598 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,598 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,599 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,599 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,599 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,612 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:00,612 INFO [RS:2;jenkins-hbase17:44429] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:00,612 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,44429,1689952559937-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,626 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,626 DEBUG [RS:0;jenkins-hbase17:33615] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:00,628 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,629 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,629 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,629 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,636 INFO [RS:1;jenkins-hbase17:33915] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:00,636 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33915,1689952559786-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,637 INFO [RS:2;jenkins-hbase17:44429] regionserver.Replication(203): jenkins-hbase17.apache.org,44429,1689952559937 started 2023-07-21 15:16:00,638 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,44429,1689952559937, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:44429, sessionid=0x1018872b3790013 2023-07-21 15:16:00,638 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:00,638 DEBUG [RS:2;jenkins-hbase17:44429] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,638 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44429,1689952559937' 2023-07-21 15:16:00,638 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:00,638 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44429,1689952559937' 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:00,639 DEBUG [RS:2;jenkins-hbase17:44429] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:00,639 INFO [RS:2;jenkins-hbase17:44429] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:16:00,642 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,643 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ZKUtil(398): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:16:00,643 INFO [RS:2;jenkins-hbase17:44429] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:16:00,643 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,643 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,645 INFO [RS:0;jenkins-hbase17:33615] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:00,645 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,33615,1689952559656-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,648 INFO [RS:1;jenkins-hbase17:33915] regionserver.Replication(203): jenkins-hbase17.apache.org,33915,1689952559786 started 2023-07-21 15:16:00,648 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33915,1689952559786, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33915, sessionid=0x1018872b3790012 2023-07-21 15:16:00,648 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:00,648 DEBUG [RS:1;jenkins-hbase17:33915] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,648 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33915,1689952559786' 2023-07-21 15:16:00,648 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33915,1689952559786' 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:00,649 DEBUG [RS:1;jenkins-hbase17:33915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:00,650 DEBUG [RS:1;jenkins-hbase17:33915] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:00,650 INFO [RS:1;jenkins-hbase17:33915] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:16:00,650 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,650 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ZKUtil(398): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:16:00,650 INFO [RS:1;jenkins-hbase17:33915] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:16:00,650 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,650 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,658 INFO [RS:0;jenkins-hbase17:33615] regionserver.Replication(203): jenkins-hbase17.apache.org,33615,1689952559656 started 2023-07-21 15:16:00,658 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,33615,1689952559656, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:33615, sessionid=0x1018872b3790011 2023-07-21 15:16:00,658 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:00,658 DEBUG [RS:0;jenkins-hbase17:33615] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,658 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33615,1689952559656' 2023-07-21 15:16:00,658 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,33615,1689952559656' 2023-07-21 15:16:00,659 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:00,660 DEBUG [RS:0;jenkins-hbase17:33615] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:00,660 DEBUG [RS:0;jenkins-hbase17:33615] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:00,660 INFO [RS:0;jenkins-hbase17:33615] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 15:16:00,660 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:00,660 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ZKUtil(398): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 15:16:00,660 INFO [RS:0;jenkins-hbase17:33615] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 15:16:00,660 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,660 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:00,680 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:00,682 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,44429,1689952559937, state=OPENING 2023-07-21 15:16:00,683 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:00,684 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:00,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=116, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,44429,1689952559937}] 2023-07-21 15:16:00,746 INFO [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C44429%2C1689952559937, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:00,752 INFO [RS:1;jenkins-hbase17:33915] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33915%2C1689952559786, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:00,765 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake 
in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:00,765 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:00,765 INFO [RS:0;jenkins-hbase17:33615] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C33615%2C1689952559656, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:00,766 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:00,781 INFO [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937/jenkins-hbase17.apache.org%2C44429%2C1689952559937.1689952560747 2023-07-21 15:16:00,782 DEBUG [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:00,791 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:00,791 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:00,791 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:00,796 WARN [ReadOnlyZKClient-127.0.0.1:62052@0x3cf2cb6a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:16:00,796 INFO [RS:1;jenkins-hbase17:33915] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786/jenkins-hbase17.apache.org%2C33915%2C1689952559786.1689952560753 2023-07-21 15:16:00,796 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:00,804 DEBUG [RS:1;jenkins-hbase17:33915] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer 
with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:00,811 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58460, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:00,812 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44429] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:58460 deadline: 1689952620811, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,817 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:00,817 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:00,817 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:00,842 INFO [RS:0;jenkins-hbase17:33615] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656/jenkins-hbase17.apache.org%2C33615%2C1689952559656.1689952560766 2023-07-21 15:16:00,844 DEBUG [RS:0;jenkins-hbase17:33615] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:00,848 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:00,850 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:00,851 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58476, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:00,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:16:00,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:00,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C44429%2C1689952559937.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:00,878 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:00,878 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:00,878 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:00,888 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937/jenkins-hbase17.apache.org%2C44429%2C1689952559937.meta.1689952560865.meta 2023-07-21 15:16:00,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:00,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:16:00,897 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:16:00,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:16:00,900 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:00,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:00,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:00,902 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:00,913 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:00,913 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:00,913 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:00,914 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:00,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 
15:16:00,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:00,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:00,923 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:00,923 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier/c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:00,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:00,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:00,924 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:00,925 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:00,925 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:00,933 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:00,933 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:00,933 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:00,934 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:00,936 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:00,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 15:16:00,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:00,941 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=146; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11491698400, jitterRate=0.07024781405925751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:00,941 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:00,942 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=117, masterSystemTime=1689952560848 2023-07-21 15:16:00,946 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:16:00,947 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:16:00,947 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,44429,1689952559937, state=OPEN 2023-07-21 15:16:00,948 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:00,949 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:00,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=116 2023-07-21 15:16:00,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=116, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,44429,1689952559937 in 265 msec 2023-07-21 15:16:00,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-21 15:16:00,952 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 431 msec
2023-07-21 15:16:01,095 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-07-21 15:16:01,175 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:01,176 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:41299
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41299
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 15:16:01,176 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:41299 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41299
2023-07-21 15:16:01,284 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41299 this server is in the failed servers list
2023-07-21 15:16:01,490 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41299 this server is in the failed servers list
2023-07-21 15:16:01,798 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41299 this server is in the failed servers list
2023-07-21 15:16:02,071 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1560ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1504ms
2023-07-21 15:16:02,303 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:41299 this server is in the failed servers list
2023-07-21 15:16:03,311 WARN [RS-EventLoopGroup-12-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:41299
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41299
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-21 15:16:03,312 DEBUG [RS-EventLoopGroup-12-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:41299 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..)
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:41299 2023-07-21 15:16:03,574 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3063ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3007ms 2023-07-21 15:16:03,843 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 15:16:03,843 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-21 15:16:05,028 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4517ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-21 15:16:05,028 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 15:16:05,030 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=7697a92683cfac49519e4a4111355983, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,39253,1689952540479, regionLocation=jenkins-hbase17.apache.org,39253,1689952540479, openSeqNum=10 2023-07-21 15:16:05,031 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,41299,1689952542769, regionLocation=jenkins-hbase17.apache.org,41299,1689952542769, openSeqNum=35 2023-07-21 15:16:05,031 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:16:05,031 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952625031 2023-07-21 15:16:05,031 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952685031 2023-07-21 15:16:05,031 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-21 15:16:05,050 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,38527,1689952536414 had 1 regions 2023-07-21 15:16:05,050 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,39253,1689952540479 had 1 regions 2023-07-21 15:16:05,050 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,41299,1689952542769 had 1 regions 2023-07-21 15:16:05,052 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,36355,1689952536596 had 0 regions 2023-07-21 15:16:05,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43113,1689952559498-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:05,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43113,1689952559498-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:05,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43113,1689952559498-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:05,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:43113, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:05,053 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:05,055 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=112, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,41299,1689952542769, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:05,055 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. is NOT online; state={7697a92683cfac49519e4a4111355983 state=OPEN, ts=1689952565031, server=jenkins-hbase17.apache.org,39253,1689952540479}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-21 15:16:05,056 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=113, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,39253,1689952540479, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:05,057 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=115, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true, isMeta: false 2023-07-21 15:16:05,057 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=114, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,36355,1689952536596, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:05,059 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769-splitting 2023-07-21 15:16:05,061 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479-splitting 2023-07-21 15:16:05,062 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,062 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,41299,1689952542769 WAL count=0, meta=false 2023-07-21 15:16:05,062 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479-splitting dir is empty, no logs to split. 
2023-07-21 15:16:05,062 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,39253,1689952540479 WAL count=0, meta=false 2023-07-21 15:16:05,063 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,063 WARN [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase17.apache.org,39253,1689952540479/hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., unknown_server=jenkins-hbase17.apache.org,41299,1689952542769/hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:05,063 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,38527,1689952536414 WAL count=0, meta=false 2023-07-21 15:16:05,064 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596-splitting 2023-07-21 15:16:05,065 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,065 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,36355,1689952536596 WAL count=0, meta=false 2023-07-21 15:16:05,066 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,066 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,41299,1689952542769 WAL count=0, meta=false 2023-07-21 15:16:05,066 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,41299,1689952542769 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:05,068 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,38527,1689952536414-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,071 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,38527,1689952536414 WAL count=0, meta=false 2023-07-21 15:16:05,071 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,38527,1689952536414 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:05,073 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,073 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,39253,1689952540479 WAL count=0, meta=false 2023-07-21 15:16:05,073 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,39253,1689952540479 WAL splitting is done? 
wals=0, meta=false 2023-07-21 15:16:05,074 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,41299,1689952542769 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41299,1689952542769-splitting does not exist. 2023-07-21 15:16:05,074 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596-splitting dir is empty, no logs to split. 2023-07-21 15:16:05,075 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,36355,1689952536596 WAL count=0, meta=false 2023-07-21 15:16:05,075 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,36355,1689952536596 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:05,078 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,39253,1689952540479 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,39253,1689952540479-splitting does not exist. 2023-07-21 15:16:05,079 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,36355,1689952536596 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36355,1689952536596-splitting does not exist. 2023-07-21 15:16:05,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN}] 2023-07-21 15:16:05,083 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN}] 2023-07-21 15:16:05,086 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,38527,1689952536414 after splitting done 2023-07-21 15:16:05,086 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase17.apache.org,38527,1689952536414 from processing; numProcessing=3 2023-07-21 15:16:05,086 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN 2023-07-21 15:16:05,087 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,36355,1689952536596 after splitting done 2023-07-21 15:16:05,087 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase17.apache.org,36355,1689952536596 from processing; numProcessing=2 2023-07-21 15:16:05,087 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=119, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN 2023-07-21 15:16:05,089 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=112, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:05,090 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=119, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:05,090 DEBUG [jenkins-hbase17:43113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-21 15:16:05,093 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true in 4.6520 sec 2023-07-21 15:16:05,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:05,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,36355,1689952536596, splitWal=true, meta=false in 4.6550 sec 2023-07-21 15:16:05,093 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=119 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:05,093 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952565093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952565093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952565093"}]},"ts":"1689952565093"} 2023-07-21 15:16:05,093 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952565093"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952565093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952565093"}]},"ts":"1689952565093"} 2023-07-21 15:16:05,097 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=120, ppid=119, state=RUNNABLE; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33915,1689952559786}] 2023-07-21 15:16:05,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=118, state=RUNNABLE; OpenRegionProcedure 
603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,44429,1689952559937}] 2023-07-21 15:16:05,252 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:05,252 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:05,254 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45616, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:05,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:05,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:05,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:05,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. service=MultiRowMutationService 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:05,259 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,260 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,260 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,261 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:05,261 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:05,262 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 603dc738ccec189e3bde34ff84c46389 columnFamilyName m 2023-07-21 15:16:05,263 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:05,263 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:05,263 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7697a92683cfac49519e4a4111355983 columnFamilyName info 2023-07-21 15:16:05,271 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:05,271 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:05,272 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:16:05,273 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:16:05,277 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:16:05,277 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(310): Store=7697a92683cfac49519e4a4111355983/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:05,278 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:16:05,278 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:16:05,278 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(310): Store=603dc738ccec189e3bde34ff84c46389/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-07-21 15:16:05,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:05,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:05,284 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7697a92683cfac49519e4a4111355983; next sequenceid=21; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9499062400, jitterRate=-0.11533087491989136}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:05,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:05,284 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 603dc738ccec189e3bde34ff84c46389; next sequenceid=77; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5ca5f2dd, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:05,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:05,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., pid=120, masterSystemTime=1689952565252 2023-07-21 15:16:05,288 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., pid=121, masterSystemTime=1689952565252 2023-07-21 15:16:05,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:05,290 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
2023-07-21 15:16:05,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=119 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPEN, openSeqNum=21, regionLocation=jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:05,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952565290"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952565290"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952565290"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952565290"}]},"ts":"1689952565290"} 2023-07-21 15:16:05,290 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:05,291 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:05,291 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, openSeqNum=77, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:05,291 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952565291"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952565291"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952565291"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952565291"}]},"ts":"1689952565291"} 2023-07-21 15:16:05,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=120, resume processing ppid=119 2023-07-21 15:16:05,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, ppid=119, state=SUCCESS; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,33915,1689952559786 in 195 msec 2023-07-21 15:16:05,294 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=118 2023-07-21 15:16:05,294 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=118, state=SUCCESS; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,44429,1689952559937 in 194 msec 2023-07-21 15:16:05,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=113 2023-07-21 15:16:05,295 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,39253,1689952540479 after splitting done 2023-07-21 15:16:05,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN in 212 msec 2023-07-21 15:16:05,295 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,39253,1689952540479 from processing; numProcessing=1 2023-07-21 15:16:05,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=112 2023-07-21 15:16:05,295 INFO [PEWorker-4] 
procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,41299,1689952542769 after splitting done 2023-07-21 15:16:05,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=112, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN in 214 msec 2023-07-21 15:16:05,295 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase17.apache.org,41299,1689952542769 from processing; numProcessing=0 2023-07-21 15:16:05,296 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,39253,1689952540479, splitWal=true, meta=false in 4.8640 sec 2023-07-21 15:16:05,296 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,41299,1689952542769, splitWal=true, meta=false in 4.8670 sec 2023-07-21 15:16:05,322 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 15:16:05,323 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 15:16:05,331 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:05,332 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:05,332 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:05,333 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-21 15:16:05,336 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43113,1689952559498] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 15:16:06,056 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-21 15:16:06,065 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:06,068 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:06,086 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 
15:16:06,088 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 15:16:06,088 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 6.010sec 2023-07-21 15:16:06,088 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 15:16:06,088 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:06,090 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=122, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 15:16:06,090 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 15:16:06,092 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 15:16:06,095 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 15:16:06,096 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 15:16:06,098 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,099 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e empty. 2023-07-21 15:16:06,099 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,099 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 15:16:06,101 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 15:16:06,101 INFO [master/jenkins-hbase17:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 15:16:06,105 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:06,106 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:06,106 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 15:16:06,106 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 15:16:06,106 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43113,1689952559498-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 15:16:06,106 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43113,1689952559498-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 15:16:06,109 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 15:16:06,116 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 15:16:06,117 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => e66f96fe3a93ede34be690ff9e55183e, NAME => 'hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.tmp 2023-07-21 15:16:06,126 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:06,127 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing e66f96fe3a93ede34be690ff9e55183e, disabling compactions & flushes 2023-07-21 15:16:06,127 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,127 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,127 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. after waiting 0 ms 2023-07-21 15:16:06,127 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:06,127 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,127 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:06,129 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 15:16:06,130 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952566130"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952566130"}]},"ts":"1689952566130"} 2023-07-21 15:16:06,131 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 15:16:06,132 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 15:16:06,132 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952566132"}]},"ts":"1689952566132"} 2023-07-21 15:16:06,133 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 15:16:06,135 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:06,135 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:06,135 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:06,135 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:06,135 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:06,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=123, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN}] 2023-07-21 15:16:06,137 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN 2023-07-21 15:16:06,138 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, ppid=122, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,44429,1689952559937; forceNewPlan=false, retain=false 2023-07-21 15:16:06,169 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x14716ec5 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:06,175 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44c0988d, compressor=null, tcpKeepAlive=true, 
tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:06,176 DEBUG [hconnection-0x575e5e5d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:06,178 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58478, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:06,183 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-21 15:16:06,183 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x14716ec5 to 127.0.0.1:62052 2023-07-21 15:16:06,184 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,186 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase17.apache.org:43113 after: jenkins-hbase17.apache.org:43113 2023-07-21 15:16:06,186 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x7399378c to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:06,193 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56903314, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:06,193 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:06,194 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 15:16:06,197 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 15:16:06,199 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 15:16:06,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43113] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false 2023-07-21 15:16:06,201 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x66b5f915 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:06,207 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b3b6f14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:06,207 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:06,209 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): 
Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:06,210 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:06,211 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018872b379001b connected 2023-07-21 15:16:06,211 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:06,213 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:06,215 DEBUG [Listener at localhost.localdomain/38883] client.ConnectionImplementation(720): Table hbase:quota not enabled 2023-07-21 15:16:06,288 INFO [jenkins-hbase17:43113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 15:16:06,290 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:06,291 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952566290"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952566290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952566290"}]},"ts":"1689952566290"} 2023-07-21 15:16:06,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,44429,1689952559937}] 2023-07-21 15:16:06,317 DEBUG [Listener at localhost.localdomain/38883] client.ConnectionImplementation(720): Table hbase:quota not enabled 2023-07-21 15:16:06,400 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:06,419 DEBUG [Listener at localhost.localdomain/38883] client.ConnectionImplementation(720): Table hbase:quota not enabled 2023-07-21 15:16:06,450 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:06,450 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e66f96fe3a93ede34be690ff9e55183e, NAME => 'hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:06,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:06,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,452 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,454 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:06,454 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:06,455 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName q 2023-07-21 15:16:06,455 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:06,456 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,459 DEBUG 
[StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:06,460 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:06,460 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName u 2023-07-21 15:16:06,462 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:06,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-21 15:16:06,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 15:16:06,475 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened e66f96fe3a93ede34be690ff9e55183e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11268797760, jitterRate=0.049488574266433716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 15:16:06,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:06,476 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., pid=124, masterSystemTime=1689952566447 2023-07-21 15:16:06,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,484 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:06,484 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952566483"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952566483"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952566483"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952566483"}]},"ts":"1689952566483"} 2023-07-21 15:16:06,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 15:16:06,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,44429,1689952559937 in 190 msec 2023-07-21 15:16:06,488 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=123, resume processing ppid=122 2023-07-21 15:16:06,488 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, ppid=122, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN in 351 msec 2023-07-21 15:16:06,489 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 15:16:06,489 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689952566489"}]},"ts":"1689952566489"} 2023-07-21 15:16:06,491 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 15:16:06,493 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=122, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 15:16:06,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, state=SUCCESS; CreateTableProcedure table=hbase:quota in 404 msec 2023-07-21 15:16:06,526 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-21 15:16:06,526 INFO [Listener at localhost.localdomain/38883] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:16:06,526 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7399378c to 127.0.0.1:62052 2023-07-21 15:16:06,526 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,526 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:16:06,526 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(257): Found active master hash=1689360050, stopped=false 2023-07-21 15:16:06,527 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:16:06,527 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:16:06,527 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 15:16:06,527 INFO [Listener at localhost.localdomain/38883] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:06,528 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:06,528 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:06,528 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:06,528 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:06,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:06,529 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:06,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:06,529 INFO [Listener at localhost.localdomain/38883] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:16:06,531 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:06,532 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:06,532 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3cf2cb6a to 127.0.0.1:62052 2023-07-21 15:16:06,533 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33615,1689952559656' ***** 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,33915,1689952559786' ***** 2023-07-21 15:16:06,533 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,44429,1689952559937' ***** 2023-07-21 15:16:06,533 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:06,533 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:06,534 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:06,537 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:06,538 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,539 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,557 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 15:16:06,561 INFO [RS:2;jenkins-hbase17:44429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a4c2592{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:06,561 INFO [RS:1;jenkins-hbase17:33915] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@bff2ab{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:06,561 INFO [RS:0;jenkins-hbase17:33615] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7c78d807{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:06,561 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 15:16:06,561 INFO [RS:2;jenkins-hbase17:44429] server.AbstractConnector(383): Stopped ServerConnector@4aea5bd2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:06,561 INFO [RS:1;jenkins-hbase17:33915] server.AbstractConnector(383): Stopped ServerConnector@4f2d6bc1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:06,561 INFO [RS:0;jenkins-hbase17:33615] server.AbstractConnector(383): Stopped ServerConnector@1712c33c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:06,561 INFO [RS:1;jenkins-hbase17:33915] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:06,561 INFO [RS:2;jenkins-hbase17:44429] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:06,561 INFO [RS:0;jenkins-hbase17:33615] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:06,562 INFO [RS:1;jenkins-hbase17:33915] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@647710e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:06,562 INFO [RS:2;jenkins-hbase17:44429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7cc0a20a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:06,562 INFO [RS:1;jenkins-hbase17:33915] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@504be1f3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:06,562 INFO [RS:2;jenkins-hbase17:44429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d618ce1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:06,562 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:16:06,562 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-21 15:16:06,562 INFO [RS:0;jenkins-hbase17:33615] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e7a882b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:06,563 INFO [RS:0;jenkins-hbase17:33615] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@99d5e88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:06,564 INFO [RS:2;jenkins-hbase17:44429] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:06,564 INFO [RS:2;jenkins-hbase17:44429] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:06,564 INFO [RS:2;jenkins-hbase17:44429] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:06,564 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(3305): Received CLOSE for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,564 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(3305): Received CLOSE for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:06,565 DEBUG [RS:2;jenkins-hbase17:44429] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40ef2bd5 to 127.0.0.1:62052 2023-07-21 15:16:06,565 DEBUG [RS:2;jenkins-hbase17:44429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:06,565 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:16:06,571 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-21 15:16:06,571 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1478): Online Regions={e66f96fe3a93ede34be690ff9e55183e=hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., 1588230740=hbase:meta,,1.1588230740, 603dc738ccec189e3bde34ff84c46389=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.} 2023-07-21 15:16:06,571 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1504): Waiting on 1588230740, 603dc738ccec189e3bde34ff84c46389, e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:06,572 INFO [RS:1;jenkins-hbase17:33915] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:06,572 INFO [RS:1;jenkins-hbase17:33915] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 15:16:06,580 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:06,580 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:16:06,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing e66f96fe3a93ede34be690ff9e55183e, disabling compactions & flushes 2023-07-21 15:16:06,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:16:06,580 INFO [RS:0;jenkins-hbase17:33615] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:06,580 INFO [RS:1;jenkins-hbase17:33915] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:06,581 INFO [RS:0;jenkins-hbase17:33615] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:06,581 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:16:06,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,581 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:16:06,581 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(3305): Received CLOSE for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:06,581 INFO [RS:0;jenkins-hbase17:33615] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:06,582 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:16:06,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.06 KB heapSize=5.87 KB 2023-07-21 15:16:06,582 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:06,582 DEBUG [RS:1;jenkins-hbase17:33915] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c9b8472 to 127.0.0.1:62052 2023-07-21 15:16:06,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. after waiting 0 ms 2023-07-21 15:16:06,582 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:06,583 DEBUG [RS:0;jenkins-hbase17:33615] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5fb809ae to 127.0.0.1:62052 2023-07-21 15:16:06,583 DEBUG [RS:0;jenkins-hbase17:33615] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,583 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33615,1689952559656; all regions closed. 
2023-07-21 15:16:06,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:16:06,583 DEBUG [RS:1;jenkins-hbase17:33915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,584 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 15:16:06,584 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1478): Online Regions={7697a92683cfac49519e4a4111355983=hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.} 2023-07-21 15:16:06,584 DEBUG [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1504): Waiting on 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:06,584 DEBUG [RS:0;jenkins-hbase17:33615] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 15:16:06,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:06,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:06,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:16:06,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:06,597 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 15:16:06,597 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 15:16:06,604 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 15:16:06,606 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 15:16:06,617 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 15:16:06,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:06,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:06,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:06,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:16:06,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:06,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:06,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. after waiting 0 ms 2023-07-21 15:16:06,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:06,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 603dc738ccec189e3bde34ff84c46389 1/1 column families, dataSize=229 B heapSize=640 B 2023-07-21 15:16:06,627 DEBUG [RS:0;jenkins-hbase17:33615] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:06,627 INFO [RS:0;jenkins-hbase17:33615] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33615%2C1689952559656:(num 1689952560766) 2023-07-21 15:16:06,628 DEBUG [RS:0;jenkins-hbase17:33615] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,628 INFO [RS:0;jenkins-hbase17:33615] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,628 INFO [RS:0;jenkins-hbase17:33615] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:06,628 INFO [RS:0;jenkins-hbase17:33615] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:06,628 INFO [RS:0;jenkins-hbase17:33615] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:06,628 INFO [RS:0;jenkins-hbase17:33615] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:06,628 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:06,630 INFO [RS:0;jenkins-hbase17:33615] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33615 2023-07-21 15:16:06,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/23.seqid, newMaxSeqId=23, maxSeqId=20 2023-07-21 15:16:06,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
2023-07-21 15:16:06,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:06,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:06,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=157 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/018c0ea790dd452bbb94d051c83f4c99 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33615,1689952559656 2023-07-21 15:16:06,674 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33615,1689952559656] 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,674 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33615,1689952559656; numProcessing=1 2023-07-21 15:16:06,674 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,675 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33615,1689952559656 already deleted, retry=false 2023-07-21 15:16:06,675 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,33615,1689952559656 expired; onlineServers=2 2023-07-21 15:16:06,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=229 B at sequenceid=80 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/a9f6458d40a74267b626b8d3db4c94e2 2023-07-21 15:16:06,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=157 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/table/f04365d3a5c54c78abbc1d4b48d634d7 2023-07-21 15:16:06,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/a9f6458d40a74267b626b8d3db4c94e2 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2 2023-07-21 15:16:06,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/018c0ea790dd452bbb94d051c83f4c99 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99 2023-07-21 15:16:06,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2, entries=2, sequenceid=80, filesize=5.0 K 2023-07-21 15:16:06,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~229 B/229, heapSize ~624 B/624, currentSize=0 B/0 for 603dc738ccec189e3bde34ff84c46389 in 88ms, sequenceid=80, compaction requested=true 2023-07-21 15:16:06,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99, entries=26, sequenceid=157, filesize=7.7 K 2023-07-21 15:16:06,740 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/table/f04365d3a5c54c78abbc1d4b48d634d7 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/f04365d3a5c54c78abbc1d4b48d634d7 2023-07-21 15:16:06,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/83.seqid, newMaxSeqId=83, maxSeqId=76 2023-07-21 15:16:06,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:06,755 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:06,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:06,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:06,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/f04365d3a5c54c78abbc1d4b48d634d7, entries=2, sequenceid=157, filesize=4.7 K 2023-07-21 15:16:06,768 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.06 KB/3132, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 186ms, sequenceid=157, compaction requested=false 2023-07-21 15:16:06,773 DEBUG [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 15:16:06,784 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,33915,1689952559786; all regions closed. 2023-07-21 15:16:06,784 DEBUG [RS:1;jenkins-hbase17:33915] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 15:16:06,814 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/recovered.edits/160.seqid, newMaxSeqId=160, maxSeqId=145 2023-07-21 15:16:06,815 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:06,816 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:06,816 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:16:06,816 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:06,819 DEBUG [RS:1;jenkins-hbase17:33915] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:06,819 INFO [RS:1;jenkins-hbase17:33915] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C33915%2C1689952559786:(num 1689952560753) 2023-07-21 15:16:06,819 DEBUG [RS:1;jenkins-hbase17:33915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,819 INFO [RS:1;jenkins-hbase17:33915] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,820 INFO [RS:1;jenkins-hbase17:33915] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:06,820 INFO [RS:1;jenkins-hbase17:33915] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:06,820 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:16:06,820 INFO [RS:1;jenkins-hbase17:33915] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:06,820 INFO [RS:1;jenkins-hbase17:33915] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:06,821 INFO [RS:1;jenkins-hbase17:33915] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:33915 2023-07-21 15:16:06,824 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:06,824 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,824 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,33915,1689952559786 2023-07-21 15:16:06,825 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,33915,1689952559786] 2023-07-21 15:16:06,825 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33915,1689952559786; numProcessing=2 2023-07-21 15:16:06,925 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:06,925 INFO [RS:1;jenkins-hbase17:33915] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33915,1689952559786; zookeeper connection closed. 2023-07-21 15:16:06,926 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33915-0x1018872b3790012, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:06,926 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5f4d9ed] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5f4d9ed 2023-07-21 15:16:06,927 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33915,1689952559786 already deleted, retry=false 2023-07-21 15:16:06,927 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,33915,1689952559786 expired; onlineServers=1 2023-07-21 15:16:06,934 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:06,934 INFO [RS:0;jenkins-hbase17:33615] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,33615,1689952559656; zookeeper connection closed. 
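Editor's note: the NodeDeleted and NodeChildrenChanged events above are how the master learns that a region server has gone away. Each region server registers an ephemeral znode under /hbase/rs, and the master's RegionServerTracker watches that directory; when the server's ZooKeeper session ends, the znode vanishes and the expiration processing logged here begins. A minimal sketch of the same pattern with the plain ZooKeeper client follows (the quorum address, session timeout, and /hbase/rs path are taken from these log lines; the watcher body is illustrative, not HBase's actual tracker):

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsTrackerSketch {
      public static void main(String[] args) throws Exception {
        // Quorum address and session timeout as logged by ZKWatcher above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62052", 90000, event -> { });

        Watcher rsWatcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged
                || event.getType() == Watcher.Event.EventType.NodeDeleted) {
              // A real tracker re-lists /hbase/rs here and expires any server
              // whose ephemeral znode is gone.
              System.out.println("live-server set changed: " + event.getPath());
            }
          }
        };

        // Each region server registers an ephemeral child under /hbase/rs; it is
        // removed automatically when that server's ZooKeeper session closes.
        List<String> live = zk.getChildren("/hbase/rs", rsWatcher);
        System.out.println("currently registered region servers: " + live);
        zk.close();
      }
    }
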
2023-07-21 15:16:06,934 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:33615-0x1018872b3790011, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:06,935 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@718f1b53] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@718f1b53 2023-07-21 15:16:06,973 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,44429,1689952559937; all regions closed. 2023-07-21 15:16:06,973 DEBUG [RS:2;jenkins-hbase17:44429] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 15:16:06,979 DEBUG [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:06,979 INFO [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C44429%2C1689952559937.meta:.meta(num 1689952560865) 2023-07-21 15:16:06,984 DEBUG [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:06,984 INFO [RS:2;jenkins-hbase17:44429] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C44429%2C1689952559937:(num 1689952560747) 2023-07-21 15:16:06,984 DEBUG [RS:2;jenkins-hbase17:44429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,984 INFO [RS:2;jenkins-hbase17:44429] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:06,984 INFO [RS:2;jenkins-hbase17:44429] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:06,985 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:16:06,985 INFO [RS:2;jenkins-hbase17:44429] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:44429 2023-07-21 15:16:06,989 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44429,1689952559937 2023-07-21 15:16:06,989 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:06,992 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,44429,1689952559937] 2023-07-21 15:16:06,993 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,44429,1689952559937; numProcessing=3 2023-07-21 15:16:06,994 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,44429,1689952559937 already deleted, retry=false 2023-07-21 15:16:06,994 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,44429,1689952559937 expired; onlineServers=0 2023-07-21 15:16:06,994 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43113,1689952559498' ***** 2023-07-21 15:16:06,994 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:16:06,995 DEBUG [M:0;jenkins-hbase17:43113] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@608e0efc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:06,995 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:06,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:06,997 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:06,997 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:06,997 INFO [M:0;jenkins-hbase17:43113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3bd9010b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:16:06,997 INFO [M:0;jenkins-hbase17:43113] server.AbstractConnector(383): Stopped ServerConnector@34017405{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:06,998 INFO [M:0;jenkins-hbase17:43113] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:06,998 INFO 
[M:0;jenkins-hbase17:43113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76613683{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:06,998 INFO [M:0;jenkins-hbase17:43113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6473921d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:06,998 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43113,1689952559498 2023-07-21 15:16:06,998 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43113,1689952559498; all regions closed. 2023-07-21 15:16:06,998 DEBUG [M:0;jenkins-hbase17:43113] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:06,998 INFO [M:0;jenkins-hbase17:43113] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:16:06,999 INFO [M:0;jenkins-hbase17:43113] server.AbstractConnector(383): Stopped ServerConnector@3c0c3de6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:06,999 DEBUG [M:0;jenkins-hbase17:43113] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:16:06,999 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 15:16:06,999 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952560503] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952560503,5,FailOnTimeoutGroup] 2023-07-21 15:16:06,999 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952560503] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952560503,5,FailOnTimeoutGroup] 2023-07-21 15:16:06,999 DEBUG [M:0;jenkins-hbase17:43113] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:16:07,002 INFO [M:0;jenkins-hbase17:43113] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:16:07,002 INFO [M:0;jenkins-hbase17:43113] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 15:16:07,002 INFO [M:0;jenkins-hbase17:43113] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:07,002 DEBUG [M:0;jenkins-hbase17:43113] master.HMaster(1512): Stopping service threads 2023-07-21 15:16:07,002 INFO [M:0;jenkins-hbase17:43113] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:16:07,003 ERROR [M:0;jenkins-hbase17:43113] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 15:16:07,003 INFO [M:0;jenkins-hbase17:43113] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:16:07,003 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 15:16:07,003 DEBUG [M:0;jenkins-hbase17:43113] zookeeper.ZKUtil(398): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:16:07,003 WARN [M:0;jenkins-hbase17:43113] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:16:07,003 INFO [M:0;jenkins-hbase17:43113] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:16:07,004 INFO [M:0;jenkins-hbase17:43113] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:16:07,005 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:16:07,005 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:07,005 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:07,005 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:16:07,005 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:07,005 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.35 KB heapSize=54.93 KB 2023-07-21 15:16:07,022 INFO [M:0;jenkins-hbase17:43113] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.35 KB at sequenceid=934 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/45a76d901c7244edb2b9ffb0bc3128fd 2023-07-21 15:16:07,034 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/45a76d901c7244edb2b9ffb0bc3128fd as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/45a76d901c7244edb2b9ffb0bc3128fd 2023-07-21 15:16:07,042 INFO [M:0;jenkins-hbase17:43113] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/45a76d901c7244edb2b9ffb0bc3128fd, entries=13, sequenceid=934, filesize=7.2 K 2023-07-21 15:16:07,043 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegion(2948): Finished flush of dataSize ~45.35 KB/46440, heapSize ~54.91 KB/56232, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=934, compaction requested=false 2023-07-21 15:16:07,058 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:16:07,058 DEBUG [M:0;jenkins-hbase17:43113] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:07,066 INFO [M:0;jenkins-hbase17:43113] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:16:07,066 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:07,067 INFO [M:0;jenkins-hbase17:43113] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43113 2023-07-21 15:16:07,070 DEBUG [M:0;jenkins-hbase17:43113] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,43113,1689952559498 already deleted, retry=false 2023-07-21 15:16:07,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:07,093 INFO [RS:2;jenkins-hbase17:44429] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,44429,1689952559937; zookeeper connection closed. 2023-07-21 15:16:07,093 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44429-0x1018872b3790013, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:07,093 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3c4091f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3c4091f4 2023-07-21 15:16:07,093 INFO [Listener at localhost.localdomain/38883] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 15:16:07,193 INFO [M:0;jenkins-hbase17:43113] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43113,1689952559498; zookeeper connection closed. 
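Editor's note: at this point the mini cluster is fully down ("Shutdown of 1 master(s) and 3 regionserver(s) complete"), and the "Sleeping a bit" entry that follows comes from TestRSGroupsBasics before it brings the cluster back up. A rough sketch of that shutdown-and-restart pattern is below, assuming the HBaseTestingUtility helpers shutdownMiniHBaseCluster() and restartHBaseCluster(int) behave as on branch-2.4; the utility instance, server count, and sleep are illustrative, not the test's exact code:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class ClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);            // 1 master + 3 region servers, as in this run

        // ... exercise rsgroup operations against the running cluster here ...

        util.shutdownMiniHBaseCluster();     // stop master + region servers, keep HDFS and ZK up
        Thread.sleep(2000);                  // the "Sleeping a bit" step visible in the log
        util.restartHBaseCluster(3);         // assumed helper: bring master + region servers back
        util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);

        util.shutdownMiniCluster();          // full teardown, including HDFS and ZooKeeper
      }
    }
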
2023-07-21 15:16:07,193 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:07,194 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43113-0x1018872b3790010, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:07,194 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:09,196 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:09,197 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43821 2023-07-21 15:16:09,198 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,198 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,199 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43821 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:09,203 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:438210x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:09,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43821-0x1018872b379001c connected 2023-07-21 15:16:09,205 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, 
quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:09,205 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:09,205 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:09,206 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43821 2023-07-21 15:16:09,206 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43821 2023-07-21 15:16:09,207 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43821 2023-07-21 15:16:09,208 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43821 2023-07-21 15:16:09,208 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43821 2023-07-21 15:16:09,210 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:09,210 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:09,210 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:09,211 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 15:16:09,211 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:09,211 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:09,211 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
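Editor's note: the restarted master above binds its NettyRpcServer to an arbitrary port and registers itself through the same ZooKeeper ensemble (127.0.0.1:62052); clients do not configure that RPC port directly but discover the active master and meta location via ZooKeeper. A hedged client-connection sketch follows (the quorum and client port come from this run; everything else is a generic client example, not part of the test):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Ensemble and client port as logged by RecoverableZooKeeper above.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "62052");

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Answered by the master services the log shows being hosted.
          System.out.println("active master: " + admin.getClusterMetrics().getMasterName());
        }
      }
    }
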
2023-07-21 15:16:09,211 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 41793 2023-07-21 15:16:09,212 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:09,215 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,216 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c7d6ff4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:09,216 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,216 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a45b6fe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:09,322 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:09,323 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:09,323 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:09,324 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:16:09,325 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,325 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@395f40d9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-41793-hbase-server-2_4_18-SNAPSHOT_jar-_-any-895843695414026219/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:16:09,327 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@58b12215{HTTP/1.1, (http/1.1)}{0.0.0.0:41793} 2023-07-21 15:16:09,327 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @42138ms 2023-07-21 15:16:09,327 INFO [Listener at localhost.localdomain/38883] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3, hbase.cluster.distributed=false 2023-07-21 15:16:09,328 DEBUG [pool-518-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:09,337 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:09,338 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41609 2023-07-21 15:16:09,339 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:16:09,340 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:16:09,340 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,341 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,342 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41609 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:09,345 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:416090x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:09,346 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:416090x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:09,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41609-0x1018872b379001d connected 2023-07-21 15:16:09,347 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:09,347 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:09,347 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 
with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41609 2023-07-21 15:16:09,348 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41609 2023-07-21 15:16:09,348 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41609 2023-07-21 15:16:09,348 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41609 2023-07-21 15:16:09,349 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41609 2023-07-21 15:16:09,350 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:09,351 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:09,351 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:09,351 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:16:09,351 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:09,351 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:09,352 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
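Editor's note: the RpcExecutor entries above describe a classic bounded call-queue setup: incoming calls land in a LinkedBlockingQueue capped at maxQueueLength, and a fixed pool of handler threads drains it. A stripped-down illustration of that shape in plain Java follows; the queue length and handler count mirror the logged values, and calls are modeled as plain Runnables rather than HBase's internal call objects:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class CallQueueSketch {
      public static void main(String[] args) {
        // maxQueueLength=30 and handlerCount=3, as in the RpcExecutor log lines.
        BlockingQueue<Runnable> callQueue = new LinkedBlockingQueue<>(30);
        int handlerCount = 3;

        for (int i = 0; i < handlerCount; i++) {
          Thread handler = new Thread(() -> {
            try {
              while (true) {
                Runnable call = callQueue.take();   // blocks until a call is queued
                call.run();                         // dispatch the call
              }
            } catch (InterruptedException ie) {
              Thread.currentThread().interrupt();   // exit on shutdown
            }
          }, "default.FPBQ.Fifo.handler-" + i);
          handler.setDaemon(true);
          handler.start();
        }

        // A producer is refused (or must back off) once the bounded queue is full:
        boolean accepted = callQueue.offer(() -> System.out.println("served one call"));
        System.out.println("call accepted: " + accepted);
      }
    }
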
2023-07-21 15:16:09,352 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 46667 2023-07-21 15:16:09,352 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:09,357 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,357 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7bfc31fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:09,357 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,358 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@45fd876{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:09,448 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:09,449 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:09,449 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:09,449 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:16:09,449 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,450 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27a61248{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-46667-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8502733349710796162/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:09,452 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@2b376f34{HTTP/1.1, (http/1.1)}{0.0.0.0:46667} 2023-07-21 15:16:09,452 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @42263ms 2023-07-21 15:16:09,461 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:09,461 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,461 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:16:09,461 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:09,462 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,462 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:09,462 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:09,462 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36003 2023-07-21 15:16:09,462 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:16:09,464 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:16:09,465 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,466 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,466 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36003 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:09,470 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:360030x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:09,471 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:360030x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:09,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36003-0x1018872b379001e connected 2023-07-21 15:16:09,472 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:09,473 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:09,473 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36003 2023-07-21 15:16:09,473 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36003 2023-07-21 15:16:09,473 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36003 2023-07-21 15:16:09,474 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36003 2023-07-21 15:16:09,474 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36003 2023-07-21 15:16:09,476 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:09,476 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:09,477 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:09,477 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:16:09,478 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:09,478 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:09,478 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
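Editor's note: each server's info page in these entries is an embedded Jetty instance with a few contexts (/logs, /static, and the webapp) behind a ServerConnector on an ephemeral port. The sketch below shows that general shape with stock Jetty 9; note that HBase actually uses a relocated copy under org.apache.hbase.thirdparty and its HttpServer adds the global filters logged above, and the directories here are placeholders:

    import org.eclipse.jetty.server.Handler;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.ContextHandler;
    import org.eclipse.jetty.server.handler.ContextHandlerCollection;
    import org.eclipse.jetty.server.handler.ResourceHandler;

    public class InfoServerSketch {
      public static void main(String[] args) throws Exception {
        Server server = new Server(0);                     // 0 = ephemeral port, like the test's ports

        ContextHandler logsCtx = new ContextHandler("/logs");
        ResourceHandler logsRes = new ResourceHandler();
        logsRes.setResourceBase("/tmp/hbase-logs");        // placeholder log directory
        logsRes.setDirectoriesListed(true);
        logsCtx.setHandler(logsRes);

        ContextHandler staticCtx = new ContextHandler("/static");
        ResourceHandler staticRes = new ResourceHandler();
        staticRes.setResourceBase("/tmp/hbase-static");    // placeholder static-content directory
        staticCtx.setHandler(staticRes);

        ContextHandlerCollection contexts = new ContextHandlerCollection();
        contexts.setHandlers(new Handler[] { logsCtx, staticCtx });
        server.setHandler(contexts);

        server.start();
        System.out.println("info server listening on " + server.getURI());
        server.stop();
      }
    }
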
2023-07-21 15:16:09,479 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 40299 2023-07-21 15:16:09,479 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:09,483 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,484 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@450aba01{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:09,484 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,484 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b3ffca7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:09,580 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:09,580 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:09,580 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:09,580 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:16:09,581 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,582 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6ca63760{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-40299-hbase-server-2_4_18-SNAPSHOT_jar-_-any-14307262366206493/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:09,583 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@3975476a{HTTP/1.1, (http/1.1)}{0.0.0.0:40299} 2023-07-21 15:16:09,584 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @42395ms 2023-07-21 15:16:09,592 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:09,593 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:09,594 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:35121 2023-07-21 15:16:09,594 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:16:09,596 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:16:09,596 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,597 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,598 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35121 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:09,601 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:351210x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:09,602 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:351210x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:09,603 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35121-0x1018872b379001f connected 2023-07-21 15:16:09,604 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:09,604 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:09,605 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35121 2023-07-21 15:16:09,605 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35121 2023-07-21 15:16:09,605 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35121 2023-07-21 15:16:09,606 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35121 2023-07-21 15:16:09,606 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35121 2023-07-21 15:16:09,608 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:09,608 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:09,608 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:09,609 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:16:09,609 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:09,609 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:09,609 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:16:09,610 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 34541 2023-07-21 15:16:09,610 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:09,611 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,611 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66a621a5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:09,612 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,612 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f3c2ab8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:09,711 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:09,711 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:09,711 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:09,712 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 15:16:09,712 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:09,713 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27f832bf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-34541-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3090574374630448689/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:09,714 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@61d3f6e5{HTTP/1.1, (http/1.1)}{0.0.0.0:34541} 2023-07-21 15:16:09,714 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @42525ms 2023-07-21 15:16:09,716 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:09,719 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7b2ccb1f{HTTP/1.1, (http/1.1)}{0.0.0.0:33921} 2023-07-21 15:16:09,719 INFO [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @42531ms 2023-07-21 15:16:09,720 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,720 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:16:09,721 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,722 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:09,722 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:09,722 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:09,722 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:09,722 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:09,724 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:09,725 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,43821,1689952569195 from backup master directory 2023-07-21 15:16:09,726 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:09,726 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,726 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
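Editor's note: immediately below, the newly active master picks up the previous master's local-store WAL: the old WAL directory is renamed aside as "-dead", the HDFS lease on its last WAL file is recovered so the file can be read safely, and the file is moved under recovered.wals for replay. The lease step comes down to DistributedFileSystem.recoverLease; a hedged sketch of just that call follows (the path is illustrative, and real code such as RecoverLeaseFSUtils bounds and backs off the retries):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RecoverLeaseSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative path; in this log it is the dead master's last WAL under MasterData/WALs/...-dead/.
        Path wal = new Path("hdfs://localhost.localdomain:37247/user/jenkins/.../old-master.wal");

        FileSystem fs = wal.getFileSystem(conf);
        if (fs instanceof DistributedFileSystem) {
          DistributedFileSystem dfs = (DistributedFileSystem) fs;
          // Returns true once the lease is released and the file length is finalized.
          boolean recovered = dfs.recoverLease(wal);
          while (!recovered) {
            Thread.sleep(1000);            // production code limits and backs off these retries
            recovered = dfs.recoverLease(wal);
          }
        }
        // Only after recovery is it safe to open and replay the WAL.
      }
    }
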
2023-07-21 15:16:09,726 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 15:16:09,726 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,742 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:09,764 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x071edc8b to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:09,769 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@372b0502, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:09,769 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 15:16:09,770 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 15:16:09,770 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:09,776 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498-dead as it is dead 2023-07-21 15:16:09,777 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498-dead/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:09,778 INFO [master/jenkins-hbase17:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498-dead/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 after 1ms 2023-07-21 15:16:09,779 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(300): Renamed 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498-dead/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:09,779 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43113,1689952559498-dead 2023-07-21 15:16:09,779 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,782 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43821%2C1689952569195, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43821,1689952569195, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/oldWALs, maxLogs=10 2023-07-21 15:16:09,793 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:09,794 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:09,794 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:09,800 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/WALs/jenkins-hbase17.apache.org,43821,1689952569195/jenkins-hbase17.apache.org%2C43821%2C1689952569195.1689952569782 2023-07-21 15:16:09,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK]] 2023-07-21 15:16:09,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:09,800 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:09,801 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:09,801 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:09,803 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:09,804 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 15:16:09,804 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 15:16:09,812 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/31d5965468314798babf1c1ceecb489d 2023-07-21 15:16:09,816 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/45a76d901c7244edb2b9ffb0bc3128fd 2023-07-21 15:16:09,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:09,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-21 15:16:09,817 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:09,821 
DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=824, maxSequenceIdInLog=936, path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:09,822 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase17.apache.org%2C43113%2C1689952559498.1689952560250 2023-07-21 15:16:09,826 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 15:16:09,829 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/936.seqid, newMaxSeqId=936, maxSeqId=822 2023-07-21 15:16:09,830 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=937; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9974656640, jitterRate=-0.07103770971298218}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:09,830 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:09,830 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 15:16:09,831 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 15:16:09,831 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 15:16:09,831 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-21 15:16:09,832 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 15:16:09,841 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-21 15:16:09,841 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-21 15:16:09,841 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-21 15:16:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-21 15:16:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-21 15:16:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:16:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33925,1689952536167, splitWal=true, meta=false 2023-07-21 15:16:09,842 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-21 15:16:09,843 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:09,843 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:09,843 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-21 15:16:09,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:09,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:09,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-21 15:16:09,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:16:09,844 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=71, state=SUCCESS; CreateTableProcedure 
table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=78, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=79, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:09,845 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:09,846 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-21 15:16:09,846 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=86, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:09,846 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-21 15:16:09,846 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=90, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=91, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689952552964 type: FLUSH version: 2 ttl: 0 ) 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=98, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-21 15:16:09,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=102, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=103, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup 
appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=111, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-21 15:16:09,848 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=112, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,41299,1689952542769, splitWal=true, meta=false 2023-07-21 15:16:09,849 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,39253,1689952540479, splitWal=true, meta=false 2023-07-21 15:16:09,849 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=114, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,36355,1689952536596, splitWal=true, meta=false 2023-07-21 15:16:09,849 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,38527,1689952536414, splitWal=true, meta=true 2023-07-21 15:16:09,849 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=122, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-21 15:16:09,849 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 17 msec 2023-07-21 15:16:09,849 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 15:16:09,850 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-21 15:16:09,850 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase17.apache.org,44429,1689952559937, table=hbase:meta, region=1588230740 2023-07-21 15:16:09,851 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 15:16:09,852 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33615,1689952559656 already deleted, retry=false 2023-07-21 15:16:09,853 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,33615,1689952559656 on jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,853 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=125, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,33615,1689952559656, splitWal=true, meta=false 2023-07-21 15:16:09,853 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=125 for jenkins-hbase17.apache.org,33615,1689952559656 (carryingMeta=false) jenkins-hbase17.apache.org,33615,1689952559656/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6b9d7ee4[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:16:09,854 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,44429,1689952559937 already deleted, retry=false 2023-07-21 15:16:09,854 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,44429,1689952559937 on jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,854 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,44429,1689952559937, splitWal=true, meta=true 2023-07-21 15:16:09,854 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=126 for jenkins-hbase17.apache.org,44429,1689952559937 (carryingMeta=true) jenkins-hbase17.apache.org,44429,1689952559937/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@d148ca1[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:16:09,855 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,33915,1689952559786 already deleted, retry=false 2023-07-21 15:16:09,855 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,33915,1689952559786 on jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,855 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=127, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,33915,1689952559786, splitWal=true, meta=false 2023-07-21 15:16:09,856 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=127 for jenkins-hbase17.apache.org,33915,1689952559786 (carryingMeta=false) jenkins-hbase17.apache.org,33915,1689952559786/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@19c0ab5b[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-21 15:16:09,856 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-21 15:16:09,856 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 15:16:09,857 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 15:16:09,857 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 15:16:09,857 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 15:16:09,858 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 15:16:09,859 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:09,859 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:09,859 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:09,859 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:09,859 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:09,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,43821,1689952569195, sessionid=0x1018872b379001c, setting cluster-up flag (Was=false) 2023-07-21 15:16:09,862 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 15:16:09,863 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,865 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, 
/hbase/online-snapshot/abort 2023-07-21 15:16:09,865 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:09,866 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/.hbase-snapshot/.tmp 2023-07-21 15:16:09,869 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 15:16:09,869 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 15:16:09,870 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-21 15:16:09,871 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:09,871 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 15:16:09,872 INFO [master/jenkins-hbase17:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 15:16:09,874 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:09,875 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:44429 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:09,877 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:44429 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 2023-07-21 15:16:09,884 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:16:09,884 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 15:16:09,884 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 15:16:09,884 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:09,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,887 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689952599887 2023-07-21 15:16:09,887 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 15:16:09,887 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 15:16:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 15:16:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 15:16:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 15:16:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 15:16:09,888 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 15:16:09,889 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 15:16:09,889 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase17.apache.org,44429,1689952559937; numProcessing=1 2023-07-21 15:16:09,890 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 15:16:09,890 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33615,1689952559656; numProcessing=2 2023-07-21 15:16:09,890 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=126, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,44429,1689952559937, splitWal=true, meta=true 2023-07-21 15:16:09,890 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=125, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33615,1689952559656, splitWal=true, meta=false 2023-07-21 15:16:09,890 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase17.apache.org,33915,1689952559786; numProcessing=3 2023-07-21 15:16:09,890 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=127, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33915,1689952559786, splitWal=true, meta=false 2023-07-21 15:16:09,891 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=126, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,44429,1689952559937, splitWal=true, meta=true, isMeta: true 2023-07-21 15:16:09,893 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 15:16:09,893 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 15:16:09,894 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937-splitting 2023-07-21 15:16:09,895 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937-splitting dir is empty, no logs to split. 2023-07-21 15:16:09,895 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,44429,1689952559937 WAL count=0, meta=true 2023-07-21 15:16:09,896 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952569893,5,FailOnTimeoutGroup] 2023-07-21 15:16:09,897 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937-splitting dir is empty, no logs to split. 
2023-07-21 15:16:09,897 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase17.apache.org,44429,1689952559937 WAL count=0, meta=true 2023-07-21 15:16:09,897 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,44429,1689952559937 WAL splitting is done? wals=0, meta=true 2023-07-21 15:16:09,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 15:16:09,899 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952569896,5,FailOnTimeoutGroup] 2023-07-21 15:16:09,900 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,900 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 15:16:09,900 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 15:16:09,900 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,900 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,901 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689952569901, completionTime=-1 2023-07-21 15:16:09,901 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 
2023-07-21 15:16:09,901 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 15:16:09,901 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:09,916 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:09,916 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:09,916 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:09,916 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:09,916 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:09,916 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:09,918 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:09,918 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:09,918 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:09,918 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:09,918 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:09,918 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:09,920 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:09,920 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:09,920 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:09,924 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ReadOnlyZKClient(139): Connect 0x660e1ef9 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:09,924 DEBUG [RS:2;jenkins-hbase17:35121] zookeeper.ReadOnlyZKClient(139): Connect 0x4b51046c to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:09,924 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ReadOnlyZKClient(139): Connect 0x1b26b8b6 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:09,933 DEBUG [RS:2;jenkins-hbase17:35121] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4596e376, compressor=null, tcpKeepAlive=true, 
tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:09,934 DEBUG [RS:2;jenkins-hbase17:35121] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41a96f1c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:09,934 DEBUG [RS:0;jenkins-hbase17:41609] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15e145a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:09,934 DEBUG [RS:0;jenkins-hbase17:41609] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11ded3c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:09,934 DEBUG [RS:1;jenkins-hbase17:36003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68c9ec48, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:09,934 DEBUG [RS:1;jenkins-hbase17:36003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cf67589, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:09,941 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:35121 2023-07-21 15:16:09,941 INFO [RS:2;jenkins-hbase17:35121] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:09,941 INFO [RS:2;jenkins-hbase17:35121] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:09,941 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:09,942 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:41609 2023-07-21 15:16:09,942 INFO [RS:0;jenkins-hbase17:41609] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:09,942 INFO [RS:0;jenkins-hbase17:41609] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:09,942 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 15:16:09,942 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43821,1689952569195 with isa=jenkins-hbase17.apache.org/136.243.18.41:35121, startcode=1689952569592 2023-07-21 15:16:09,942 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43821,1689952569195 with isa=jenkins-hbase17.apache.org/136.243.18.41:41609, startcode=1689952569336 2023-07-21 15:16:09,942 DEBUG [RS:2;jenkins-hbase17:35121] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:09,942 DEBUG [RS:0;jenkins-hbase17:41609] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:09,943 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:36003 2023-07-21 15:16:09,943 INFO [RS:1;jenkins-hbase17:36003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:09,943 INFO [RS:1;jenkins-hbase17:36003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:09,943 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:09,943 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43821,1689952569195 with isa=jenkins-hbase17.apache.org/136.243.18.41:36003, startcode=1689952569461 2023-07-21 15:16:09,943 DEBUG [RS:1;jenkins-hbase17:36003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:09,944 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45845, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:09,944 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:42135, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:09,944 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53711, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:09,945 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,945 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 15:16:09,946 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:16:09,946 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,946 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:09,946 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:09,947 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,946 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 15:16:09,947 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:09,947 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:09,947 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:09,947 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 15:16:09,947 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:09,947 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41793 2023-07-21 15:16:09,947 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41793 2023-07-21 15:16:09,947 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:09,947 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:09,947 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41793 2023-07-21 15:16:09,948 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:09,949 DEBUG [RS:2;jenkins-hbase17:35121] 
zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,949 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,949 WARN [RS:2;jenkins-hbase17:35121] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:16:09,949 WARN [RS:1;jenkins-hbase17:36003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 15:16:09,949 INFO [RS:2;jenkins-hbase17:35121] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:09,949 INFO [RS:1;jenkins-hbase17:36003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:09,949 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,949 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,949 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,949 WARN [RS:0;jenkins-hbase17:41609] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
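Each region server above instantiates an AsyncFSWALProvider and resolves its per-server logDir under .../WALs/. A hedged sketch of the configuration that drives that choice; the key names are the ones used by WALFactory in 2.x, and the explicit values simply restate the defaults, so this is illustrative rather than something the test sets:

```java
// Hedged sketch: configuration keys behind "Instantiating WALProvider of type AsyncFSWALProvider".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");       // provider seen in the log above
    conf.set("hbase.wal.meta_provider", "asyncfs");  // provider used for the hbase:meta WAL
    return conf;
  }
}
```

Setting hbase.wal.provider to "filesystem" would instead fall back to the classic FSHLog-based provider.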
2023-07-21 15:16:09,949 INFO [RS:0;jenkins-hbase17:41609] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:09,950 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,951 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,35121,1689952569592] 2023-07-21 15:16:09,951 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41609,1689952569336] 2023-07-21 15:16:09,951 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=50ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-21 15:16:09,951 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,36003,1689952569461] 2023-07-21 15:16:09,963 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,963 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,963 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,964 DEBUG [RS:1;jenkins-hbase17:36003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:09,964 INFO [RS:1;jenkins-hbase17:36003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:09,966 DEBUG [RS:2;jenkins-hbase17:35121] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,966 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:09,966 DEBUG [RS:2;jenkins-hbase17:35121] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,966 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:09,967 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,967 DEBUG [RS:2;jenkins-hbase17:35121] zookeeper.ZKUtil(162): 
regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:09,967 INFO [RS:1;jenkins-hbase17:36003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:09,967 INFO [RS:1;jenkins-hbase17:36003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:09,967 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,967 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:09,969 DEBUG [RS:0;jenkins-hbase17:41609] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:09,969 DEBUG [RS:2;jenkins-hbase17:35121] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:09,969 INFO [RS:0;jenkins-hbase17:41609] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:09,969 INFO [RS:2;jenkins-hbase17:35121] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:09,969 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,971 DEBUG [RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,972 DEBUG 
[RS:1;jenkins-hbase17:36003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,978 INFO [RS:2;jenkins-hbase17:35121] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:09,978 INFO [RS:2;jenkins-hbase17:35121] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:09,978 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,979 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:09,984 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,979 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:09,984 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,985 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,985 INFO [RS:0;jenkins-hbase17:41609] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:09,985 INFO [RS:0;jenkins-hbase17:41609] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:09,986 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,986 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:09,988 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,988 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,989 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,989 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,989 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
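The PressureAwareCompactionThroughputController lines above report a 100 MB/s upper bound, a 50 MB/s lower bound, and a 60000 ms tuning period. A hedged sketch of the keys behind those numbers, written from memory of the upstream controller; treat the key names as assumptions and verify them against the HBase version in use:

```java
// Hedged sketch: the knobs behind "Compaction throughput configurations ..." above.
// Values restate what the log reports (100 MB/s, 50 MB/s, 60 s tuning period).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputConfig {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60 * 1000); // key name as recalled; verify
    return conf;
  }
}
```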
2023-07-21 15:16:09,989 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:09,990 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,990 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:09,991 DEBUG [RS:2;jenkins-hbase17:35121] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,991 DEBUG [RS:0;jenkins-hbase17:41609] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:09,997 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore 
name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,998 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,998 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:09,998 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:10,000 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:10,000 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:10,004 INFO [RS:1;jenkins-hbase17:36003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:10,005 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36003,1689952569461-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:10,015 INFO [RS:0;jenkins-hbase17:41609] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:10,015 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41609,1689952569336-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:10,016 INFO [RS:1;jenkins-hbase17:36003] regionserver.Replication(203): jenkins-hbase17.apache.org,36003,1689952569461 started 2023-07-21 15:16:10,016 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36003,1689952569461, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36003, sessionid=0x1018872b379001e 2023-07-21 15:16:10,016 INFO [RS:2;jenkins-hbase17:35121] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:10,016 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:10,016 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35121,1689952569592-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
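The MemStoreFlusher lines above report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, which is consistent with the default sizing: the low mark defaults to 95% of the limit (782.4 × 0.95 ≈ 743.3), and the limit itself defaults to 40% of the region server heap. A hedged sketch of the two keys involved, with values restating the 2.x defaults rather than anything this test configures:

```java
// Hedged sketch: sizing behind "globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreSizingConfig {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);              // fraction of heap
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f); // flush low-water mark
    return conf;
  }
}
```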
2023-07-21 15:16:10,016 DEBUG [RS:1;jenkins-hbase17:36003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:10,016 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36003,1689952569461' 2023-07-21 15:16:10,016 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:10,016 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36003,1689952569461' 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:10,017 DEBUG [RS:1;jenkins-hbase17:36003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:10,017 INFO [RS:1;jenkins-hbase17:36003] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:10,017 INFO [RS:1;jenkins-hbase17:36003] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
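RS:1 reports both RPC and space quota support as disabled, which is the default. If a test did need quotas, the rough shape would look like the sketch below: the switch is a cluster-wide configuration key, and throttles are then installed through the Admin quota API. The table name here is a placeholder, not something from this run:

```java
// Hedged sketch: enabling the quota machinery the log reports as disabled,
// then applying a simple request-rate throttle to a placeholder table.
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class EnableQuotasSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.quota.enabled", true); // must be set cluster-wide before servers start
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.setQuota(QuotaSettingsFactory.throttleTable(
          TableName.valueOf("t1"), ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
    }
  }
}
```

Note that hbase.quota.enabled has to be in the server-side configuration when the master and region servers start; flipping it on a client Configuration alone has no effect.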
2023-07-21 15:16:10,028 INFO [RS:2;jenkins-hbase17:35121] regionserver.Replication(203): jenkins-hbase17.apache.org,35121,1689952569592 started 2023-07-21 15:16:10,028 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,35121,1689952569592, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:35121, sessionid=0x1018872b379001f 2023-07-21 15:16:10,028 INFO [RS:0;jenkins-hbase17:41609] regionserver.Replication(203): jenkins-hbase17.apache.org,41609,1689952569336 started 2023-07-21 15:16:10,028 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:10,028 DEBUG [RS:2;jenkins-hbase17:35121] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:10,028 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35121,1689952569592' 2023-07-21 15:16:10,028 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:10,028 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41609,1689952569336, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41609, sessionid=0x1018872b379001d 2023-07-21 15:16:10,031 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:10,031 DEBUG [RS:0;jenkins-hbase17:41609] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:10,031 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41609,1689952569336' 2023-07-21 15:16:10,031 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:10,032 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:10,032 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:10,033 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41609,1689952569336' 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:10,033 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot 
starting 2023-07-21 15:16:10,033 DEBUG [RS:2;jenkins-hbase17:35121] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:10,033 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35121,1689952569592' 2023-07-21 15:16:10,033 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:10,033 DEBUG [RS:0;jenkins-hbase17:41609] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:10,034 DEBUG [RS:2;jenkins-hbase17:35121] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:10,034 DEBUG [RS:0;jenkins-hbase17:41609] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:10,034 INFO [RS:0;jenkins-hbase17:41609] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:10,035 INFO [RS:0;jenkins-hbase17:41609] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:16:10,035 DEBUG [RS:2;jenkins-hbase17:35121] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:10,035 INFO [RS:2;jenkins-hbase17:35121] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:10,035 INFO [RS:2;jenkins-hbase17:35121] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:10,052 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:10,053 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,41609,1689952569336, state=OPENING 2023-07-21 15:16:10,054 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:10,054 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:10,054 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=128, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41609,1689952569336}] 2023-07-21 15:16:10,119 INFO [RS:1;jenkins-hbase17:36003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase17.apache.org%2C36003%2C1689952569461, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36003,1689952569461, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:10,137 INFO [RS:2;jenkins-hbase17:35121] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35121%2C1689952569592, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:10,139 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:10,139 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:10,140 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:10,140 INFO [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41609%2C1689952569336, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41609,1689952569336, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:10,145 INFO [RS:1;jenkins-hbase17:36003] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,36003,1689952569461/jenkins-hbase17.apache.org%2C36003%2C1689952569461.1689952570120 2023-07-21 15:16:10,145 DEBUG [RS:1;jenkins-hbase17:36003] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK]] 2023-07-21 15:16:10,154 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:10,155 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:10,155 DEBUG [RS-EventLoopGroup-16-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:10,157 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:10,157 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:10,157 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:10,158 INFO [RS:2;jenkins-hbase17:35121] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592/jenkins-hbase17.apache.org%2C35121%2C1689952569592.1689952570139 2023-07-21 15:16:10,160 DEBUG [RS:2;jenkins-hbase17:35121] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:10,160 INFO [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41609,1689952569336/jenkins-hbase17.apache.org%2C41609%2C1689952569336.1689952570141 2023-07-21 15:16:10,161 DEBUG [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK]] 2023-07-21 15:16:10,186 WARN [ReadOnlyZKClient-127.0.0.1:62052@0x071edc8b] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 15:16:10,187 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:10,189 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:10,190 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41609] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:57514 deadline: 1689952630189, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:10,210 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to 
jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:10,213 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:10,216 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:10,223 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:16:10,223 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:10,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41609%2C1689952569336.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41609,1689952569336, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:10,236 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:10,236 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:10,236 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:10,238 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41609,1689952569336/jenkins-hbase17.apache.org%2C41609%2C1689952569336.meta.1689952570225.meta 2023-07-21 15:16:10,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK]] 2023-07-21 15:16:10,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:10,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:10,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:16:10,239 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 15:16:10,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:16:10,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:10,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:16:10,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:16:10,240 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:10,241 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:10,241 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:10,242 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:10,249 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99 2023-07-21 15:16:10,258 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:10,258 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:10,258 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:10,259 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:10,260 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:10,260 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:10,261 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:10,277 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:10,277 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier/c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:10,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:10,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:10,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:10,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:10,280 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:10,289 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:10,289 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:10,295 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/f04365d3a5c54c78abbc1d4b48d634d7 2023-07-21 15:16:10,295 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:10,296 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:10,297 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:10,299 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
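The open journal above brings hbase:meta,,1.1588230740 online with its three column families (info, rep_barrier, table) and their store files. Once meta is open it can be read like any other table; a small sketch, assuming a reachable cluster, using the standard catalog family ('info') that the store opener loads above:

```java
// Hedged sketch: scanning the hbase:meta region whose open journal appears above.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result r : scanner) {
        System.out.println(Bytes.toStringBinary(r.getRow())); // one catalog row per region
      }
    }
  }
}
```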
2023-07-21 15:16:10,301 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:10,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=161; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10951994560, jitterRate=0.01998397707939148}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:10,301 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:10,302 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=129, masterSystemTime=1689952570210 2023-07-21 15:16:10,307 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:16:10,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:16:10,309 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,41609,1689952569336, state=OPEN 2023-07-21 15:16:10,310 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:10,310 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:10,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=128 2023-07-21 15:16:10,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=128, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41609,1689952569336 in 256 msec 2023-07-21 15:16:10,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 15:16:10,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 415 msec 2023-07-21 15:16:10,509 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:10,613 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:10,818 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:11,124 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] 
ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:11,455 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1554ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1504ms 2023-07-21 15:16:11,630 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase17.apache.org/136.243.18.41:44429 this server is in the failed servers list 2023-07-21 15:16:11,957 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:12,634 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:44429 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:12,636 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:44429 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 2023-07-21 15:16:12,958 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3057ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3007ms 2023-07-21 15:16:13,843 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 15:16:14,411 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4510ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-21 15:16:14,411 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 15:16:14,414 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=7697a92683cfac49519e4a4111355983, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,33915,1689952559786, regionLocation=jenkins-hbase17.apache.org,33915,1689952559786, openSeqNum=21 2023-07-21 15:16:14,414 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=e66f96fe3a93ede34be690ff9e55183e, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,44429,1689952559937, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937, openSeqNum=2 2023-07-21 15:16:14,414 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, lastHost=jenkins-hbase17.apache.org,44429,1689952559937, regionLocation=jenkins-hbase17.apache.org,44429,1689952559937, openSeqNum=77 2023-07-21 15:16:14,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 15:16:14,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689952634414 2023-07-21 15:16:14,415 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689952694414 2023-07-21 15:16:14,415 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-21 15:16:14,430 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,44429,1689952559937 had 3 regions 2023-07-21 15:16:14,430 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43821,1689952569195-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:14,430 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43821,1689952569195-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:14,430 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43821,1689952569195-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
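The master-side entries above reference procedure pids directly (OpenRegionProcedure pid=129 under ppid=128, plus ServerCrashProcedures 125-127 for the servers from the previous cluster incarnation). A hedged sketch of pulling the procedure list from a client; in 2.x Admin exposes it as JSON text, so it is only suited to eyeballing or string-level checks:

```java
// Hedged sketch: dumping the master procedures (e.g. ServerCrashProcedure,
// TransitRegionStateProcedure) referenced by pid in the log above.
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListProceduresSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      System.out.println(admin.getProcedures()); // JSON description of known procedures
      System.out.println(admin.getLocks());      // JSON description of procedure locks
    }
  }
}
```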
2023-07-21 15:16:14,431 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,33615,1689952559656 had 0 regions 2023-07-21 15:16:14,431 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:43821, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:14,431 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,33915,1689952559786 had 1 regions 2023-07-21 15:16:14,431 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:14,432 WARN [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. is NOT online; state={7697a92683cfac49519e4a4111355983 state=OPEN, ts=1689952574414, server=jenkins-hbase17.apache.org,33915,1689952559786}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-21 15:16:14,433 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=126, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,44429,1689952559937, splitWal=true, meta=true, isMeta: false 2023-07-21 15:16:14,433 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=125, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33615,1689952559656, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:14,433 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=127, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,33915,1689952559786, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:14,436 WARN [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase17.apache.org,33915,1689952559786/hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., unknown_server=jenkins-hbase17.apache.org,44429,1689952559937/hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., unknown_server=jenkins-hbase17.apache.org,44429,1689952559937/hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:14,436 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937-splitting dir is empty, no logs to split. 2023-07-21 15:16:14,436 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,44429,1689952559937 WAL count=0, meta=false 2023-07-21 15:16:14,436 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656-splitting 2023-07-21 15:16:14,437 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656-splitting dir is empty, no logs to split. 
2023-07-21 15:16:14,437 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,33615,1689952559656 WAL count=0, meta=false 2023-07-21 15:16:14,437 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786-splitting 2023-07-21 15:16:14,438 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786-splitting dir is empty, no logs to split. 2023-07-21 15:16:14,438 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33915,1689952559786 WAL count=0, meta=false 2023-07-21 15:16:14,439 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44429,1689952559937-splitting dir is empty, no logs to split. 2023-07-21 15:16:14,439 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase17.apache.org,44429,1689952559937 WAL count=0, meta=false 2023-07-21 15:16:14,439 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,44429,1689952559937 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:14,440 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656-splitting dir is empty, no logs to split. 2023-07-21 15:16:14,440 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,33615,1689952559656 WAL count=0, meta=false 2023-07-21 15:16:14,440 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,33615,1689952559656 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:14,440 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786-splitting dir is empty, no logs to split. 2023-07-21 15:16:14,440 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase17.apache.org,33915,1689952559786 WAL count=0, meta=false 2023-07-21 15:16:14,440 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,33915,1689952559786 WAL splitting is done? 
wals=0, meta=false 2023-07-21 15:16:14,441 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN}] 2023-07-21 15:16:14,442 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN 2023-07-21 15:16:14,442 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN 2023-07-21 15:16:14,442 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,33915,1689952559786 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33915,1689952559786-splitting does not exist. 2023-07-21 15:16:14,442 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,33615,1689952559656 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,33615,1689952559656-splitting does not exist. 2023-07-21 15:16:14,444 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:14,444 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:14,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=127, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN}] 2023-07-21 15:16:14,444 DEBUG [jenkins-hbase17:43821] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:14,445 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:14,445 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:14,445 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:14,445 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:14,445 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-21 15:16:14,448 INFO 
[PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,33615,1689952559656 after splitting done 2023-07-21 15:16:14,448 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:14,448 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase17.apache.org,33615,1689952559656 from processing; numProcessing=2 2023-07-21 15:16:14,448 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=127, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN 2023-07-21 15:16:14,448 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:14,448 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952574448"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952574448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952574448"}]},"ts":"1689952574448"} 2023-07-21 15:16:14,448 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952574448"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952574448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952574448"}]},"ts":"1689952574448"} 2023-07-21 15:16:14,450 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=127, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-21 15:16:14,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=130, state=RUNNABLE; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,35121,1689952569592}] 2023-07-21 15:16:14,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33615,1689952559656, splitWal=true, meta=false in 4.5950 sec 2023-07-21 15:16:14,452 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=131, state=RUNNABLE; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,41609,1689952569336}] 2023-07-21 15:16:14,600 DEBUG [jenkins-hbase17:43821] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 15:16:14,601 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-07-21 15:16:14,601 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 15:16:14,601 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 15:16:14,601 DEBUG [jenkins-hbase17:43821] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 15:16:14,601 DEBUG [jenkins-hbase17:43821] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 15:16:14,602 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:14,602 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952574602"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952574602"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952574602"}]},"ts":"1689952574602"} 2023-07-21 15:16:14,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=132, state=RUNNABLE; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,36003,1689952569461}] 2023-07-21 15:16:14,605 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:14,606 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:14,607 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52896, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:14,621 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:14,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e66f96fe3a93ede34be690ff9e55183e, NAME => 'hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:14,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. service=MultiRowMutationService 2023-07-21 15:16:14,623 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 15:16:14,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:14,623 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,624 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:14,624 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:14,624 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,624 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName q 2023-07-21 15:16:14,625 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:14,625 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:14,625 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:14,625 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,625 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 603dc738ccec189e3bde34ff84c46389 columnFamilyName m 2023-07-21 15:16:14,626 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:14,626 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:14,626 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName u 2023-07-21 15:16:14,627 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:14,627 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,628 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-21 15:16:14,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:14,632 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:16:14,632 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:16:14,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened e66f96fe3a93ede34be690ff9e55183e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9951206880, jitterRate=-0.07322163879871368}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 15:16:14,632 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:14,633 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., pid=134, masterSystemTime=1689952574608 2023-07-21 15:16:14,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:14,635 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:14,636 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:14,636 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952574636"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952574636"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952574636"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952574636"}]},"ts":"1689952574636"} 2023-07-21 15:16:14,638 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2 2023-07-21 15:16:14,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=131 2023-07-21 15:16:14,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=131, state=SUCCESS; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,41609,1689952569336 in 185 msec 2023-07-21 15:16:14,640 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, ASSIGN in 198 msec 2023-07-21 15:16:14,643 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:16:14,643 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:16:14,643 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(310): Store=603dc738ccec189e3bde34ff84c46389/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:14,644 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,646 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:44429 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:14,647 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:44429 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 2023-07-21 15:16:14,647 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4148 ms ago, cancelled=false, msg=Call to address=jenkins-hbase17.apache.org/136.243.18.41:44429 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., hostname=jenkins-hbase17.apache.org,44429,1689952559937, seqNum=77, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase17.apache.org/136.243.18.41:44429 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:44429 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:14,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:14,650 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 603dc738ccec189e3bde34ff84c46389; next sequenceid=84; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@72da17a3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:14,650 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:14,651 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., pid=133, masterSystemTime=1689952574605 2023-07-21 15:16:14,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 15:16:14,655 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 15:16:14,656 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:16:14,657 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:14,657 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, openSeqNum=84, regionLocation=jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:14,657 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952574657"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952574657"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952574657"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952574657"}]},"ts":"1689952574657"} 2023-07-21 15:16:14,659 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16162 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 15:16:14,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=130 2023-07-21 15:16:14,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=130, state=SUCCESS; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,35121,1689952569592 in 208 msec 2023-07-21 15:16:14,662 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=126 2023-07-21 15:16:14,662 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,44429,1689952559937 after splitting done 2023-07-21 15:16:14,662 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HStore(1912): 603dc738ccec189e3bde34ff84c46389/m is initiating minor compaction (all files) 2023-07-21 15:16:14,662 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, ASSIGN in 219 msec 2023-07-21 15:16:14,662 INFO [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 603dc738ccec189e3bde34ff84c46389/m in hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:16:14,662 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase17.apache.org,44429,1689952559937 from processing; numProcessing=1 2023-07-21 15:16:14,662 INFO [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2] into tmpdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp, totalSize=15.8 K 2023-07-21 15:16:14,663 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] compactions.Compactor(207): Compacting 6ca2192a296d47859e18b9a84011d90b, keycount=10, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1689952548252 2023-07-21 15:16:14,663 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,44429,1689952559937, splitWal=true, meta=true in 4.8080 sec 2023-07-21 15:16:14,664 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] compactions.Compactor(207): Compacting e9bcd7bb10a04f6bbcfbde3e28e08f7b, keycount=14, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=73, earliestPutTs=1689952556576 2023-07-21 15:16:14,664 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] compactions.Compactor(207): Compacting a9f6458d40a74267b626b8d3db4c94e2, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1689952565330 2023-07-21 15:16:14,683 INFO [RS:2;jenkins-hbase17:35121-shortCompactions-0] throttle.PressureAwareThroughputController(145): 603dc738ccec189e3bde34ff84c46389#m#compaction#12 average throughput is 0.22 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 15:16:14,715 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/fb958306d2ea4ca9816d37f319cf9f17 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/fb958306d2ea4ca9816d37f319cf9f17 2023-07-21 15:16:14,730 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:16:14,731 INFO [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 603dc738ccec189e3bde34ff84c46389/m of 603dc738ccec189e3bde34ff84c46389 into fb958306d2ea4ca9816d37f319cf9f17(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 15:16:14,731 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:14,732 INFO [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., storeName=603dc738ccec189e3bde34ff84c46389/m, priority=13, startTime=1689952574652; duration=0sec 2023-07-21 15:16:14,732 DEBUG [RS:2;jenkins-hbase17:35121-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 15:16:14,759 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:14,759 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:14,761 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:14,765 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:14,765 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:14,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:14,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,767 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,768 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:14,768 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:14,769 INFO 
[StoreOpener-7697a92683cfac49519e4a4111355983-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7697a92683cfac49519e4a4111355983 columnFamilyName info 2023-07-21 15:16:14,776 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:14,776 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:14,781 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:16:14,781 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(310): Store=7697a92683cfac49519e4a4111355983/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:14,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,783 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:14,786 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7697a92683cfac49519e4a4111355983; next sequenceid=24; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11032967680, jitterRate=0.02752518653869629}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:14,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:14,787 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., pid=135, masterSystemTime=1689952574759 2023-07-21 15:16:14,791 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.
2023-07-21 15:16:14,792 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.
2023-07-21 15:16:14,792 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPEN, openSeqNum=24, regionLocation=jenkins-hbase17.apache.org,36003,1689952569461
2023-07-21 15:16:14,792 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952574792"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952574792"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952574792"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952574792"}]},"ts":"1689952574792"}
2023-07-21 15:16:14,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=132
2023-07-21 15:16:14,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,36003,1689952569461 in 190 msec
2023-07-21 15:16:14,797 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=127
2023-07-21 15:16:14,797 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,33915,1689952559786 after splitting done
2023-07-21 15:16:14,797 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=127, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, ASSIGN in 351 msec
2023-07-21 15:16:14,797 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase17.apache.org,33915,1689952559786 from processing; numProcessing=0
2023-07-21 15:16:14,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,33915,1689952559786, splitWal=true, meta=false in 4.9420 sec
2023-07-21 15:16:15,433 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace
2023-07-21 15:16:15,442 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:15,461 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-21 15:16:15,485 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-07-21 15:16:15,486 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.760sec
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43821,1689952569195-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-07-21 15:16:15,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43821,1689952569195-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-07-21 15:16:15,487 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-07-21 15:16:15,521 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x0c810d6f to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 15:16:15,528 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37d9ff22, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 15:16:15,530 DEBUG [hconnection-0x1ae2d388-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:15,531 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57528, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-21 15:16:15,538 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1262): HBase has been restarted
2023-07-21 15:16:15,538 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0c810d6f to 127.0.0.1:62052
2023-07-21 15:16:15,538 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-21 15:16:15,540 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase17.apache.org:43821 after: jenkins-hbase17.apache.org:43821
2023-07-21 15:16:15,540 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x005f37fb to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 15:16:15,545 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c2ddaeb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 15:16:15,546 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 15:16:15,964 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-07-21 15:16:15,969 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-07-21 15:16:15,969 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota'
2023-07-21 15:16:18,699 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:18,700 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37864, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-21 15:16:18,703 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information
2023-07-21 15:16:18,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode.
2023-07-21 15:16:18,715 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:18,716 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 15:16:18,716 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 15:16:18,717 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup
2023-07-21 15:16:18,717 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online
2023-07-21 15:16:18,750 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-07-21 15:16:18,754 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-07-21 15:16:18,756 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer
2023-07-21 15:16:18,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(492): Client=jenkins//136.243.18.41 set balanceSwitch=false
2023-07-21 15:16:18,758 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(139): Connect 0x3b4ef0b0 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 15:16:18,769 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c379e28, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 15:16:18,769 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62052
2023-07-21 15:16:18,771 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:18,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018872b3790027 connected
2023-07-21 15:16:18,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:16:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:16:18,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default
2023-07-21 15:16:18,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-21 15:16:18,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables
2023-07-21 15:16:18,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default
2023-07-21 15:16:18,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers
2023-07-21 15:16:18,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master
2023-07-21 15:16:18,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:18,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 15:16:18,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 15:16:18,796 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 1
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 15:16:18,808 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 15:16:18,816 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:44851
2023-07-21 15:16:18,817 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 15:16:18,819 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 15:16:18,820 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:18,821 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 15:16:18,822 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44851 connecting to ZooKeeper ensemble=127.0.0.1:62052
2023-07-21 15:16:18,828 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:448510x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 15:16:18,830 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:448510x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 15:16:18,830 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:448510x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/running
2023-07-21 15:16:18,831 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:448510x0, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 15:16:18,833 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44851-0x1018872b3790028 connected
2023-07-21 15:16:18,833 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44851
2023-07-21 15:16:18,834 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44851
2023-07-21 15:16:18,834 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44851
2023-07-21 15:16:18,841 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44851
2023-07-21 15:16:18,843 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44851
2023-07-21 15:16:18,847 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 15:16:18,847 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 15:16:18,847 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 15:16:18,848 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 15:16:18,848 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 15:16:18,848 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 15:16:18,848 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 15:16:18,849 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 41211
2023-07-21 15:16:18,849 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 15:16:18,860 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:18,861 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@257ebbb2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE}
2023-07-21 15:16:18,861 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:18,861 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46bd3e7f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-21 15:16:18,984 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 15:16:18,984 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 15:16:18,985 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 15:16:18,985 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 15:16:18,986 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 15:16:18,986 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@261e4b91{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-41211-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2386475709087048052/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-21 15:16:18,988 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@6f180e2b{HTTP/1.1, (http/1.1)}{0.0.0.0:41211}
2023-07-21 15:16:18,988 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @51799ms
2023-07-21 15:16:18,992 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03
2023-07-21 15:16:18,992 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-21 15:16:18,993 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-21 15:16:18,993 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-21 15:16:18,995 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-21 15:16:18,996 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ReadOnlyZKClient(139): Connect 0x5cc30725 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 15:16:19,000 DEBUG [RS:3;jenkins-hbase17:44851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5204f2cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 15:16:19,000 DEBUG [RS:3;jenkins-hbase17:44851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@563b6c55, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-07-21 15:16:19,009 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase17:44851
2023-07-21 15:16:19,009 INFO [RS:3;jenkins-hbase17:44851] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-21 15:16:19,009 INFO [RS:3;jenkins-hbase17:44851] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-21 15:16:19,009 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1022): About to register with Master.
2023-07-21 15:16:19,010 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43821,1689952569195 with isa=jenkins-hbase17.apache.org/136.243.18.41:44851, startcode=1689952578807
2023-07-21 15:16:19,010 DEBUG [RS:3;jenkins-hbase17:44851] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-21 15:16:19,012 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:58911, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService
2023-07-21 15:16:19,012 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,012 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-21 15:16:19,012 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3
2023-07-21 15:16:19,012 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247
2023-07-21 15:16:19,013 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41793
2023-07-21 15:16:19,013 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 15:16:19,013 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 15:16:19,014 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 15:16:19,013 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 15:16:19,014 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,014 WARN [RS:3;jenkins-hbase17:44851] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-21 15:16:19,014 INFO [RS:3;jenkins-hbase17:44851] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-21 15:16:19,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592
2023-07-21 15:16:19,014 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592
2023-07-21 15:16:19,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336
2023-07-21 15:16:19,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336
2023-07-21 15:16:19,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:19,015 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,44851,1689952578807]
2023-07-21 15:16:19,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592
2023-07-21 15:16:19,020 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461
2023-07-21 15:16:19,021 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336
2023-07-21 15:16:19,021 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2
2023-07-21 15:16:19,021 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461
2023-07-21 15:16:19,021 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,021 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461
2023-07-21 15:16:19,024 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4
2023-07-21 15:16:19,024 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,024 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,025 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592
2023-07-21 15:16:19,025 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336
2023-07-21 15:16:19,025 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461
2023-07-21 15:16:19,026 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,026 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-21 15:16:19,026 INFO [RS:3;jenkins-hbase17:44851] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-21 15:16:19,027 INFO [RS:3;jenkins-hbase17:44851] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-21 15:16:19,028 INFO [RS:3;jenkins-hbase17:44851] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-21 15:16:19,028 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,028 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-21 15:16:19,030 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,031 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,031 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,031 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,031 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,031 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,032 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-07-21 15:16:19,032 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,032 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,032 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,032 DEBUG [RS:3;jenkins-hbase17:44851] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-21 15:16:19,034 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,034 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,035 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,045 INFO [RS:3;jenkins-hbase17:44851] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-21 15:16:19,045 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,44851,1689952578807-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-21 15:16:19,056 INFO [RS:3;jenkins-hbase17:44851] regionserver.Replication(203): jenkins-hbase17.apache.org,44851,1689952578807 started
2023-07-21 15:16:19,056 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,44851,1689952578807, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:44851, sessionid=0x1018872b3790028
2023-07-21 15:16:19,056 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-21 15:16:19,056 DEBUG [RS:3;jenkins-hbase17:44851] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,056 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44851,1689952578807'
2023-07-21 15:16:19,056 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-21 15:16:19,056 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,44851,1689952578807
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,44851,1689952578807'
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-21 15:16:19,057 DEBUG [RS:3;jenkins-hbase17:44851] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-21 15:16:19,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master
2023-07-21 15:16:19,058 DEBUG [RS:3;jenkins-hbase17:44851] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-21 15:16:19,058 INFO [RS:3;jenkins-hbase17:44851] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-21 15:16:19,059 INFO [RS:3;jenkins-hbase17:44851] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-21 15:16:19,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:19,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 15:16:19,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 15:16:19,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 15:16:19,076 DEBUG [hconnection-0x52e8eba-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:19,082 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-21 15:16:19,091 DEBUG [hconnection-0x52e8eba-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-21 15:16:19,092 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37878, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-21 15:16:19,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:16:19,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:16:19,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43821] to rsgroup master
2023-07-21 15:16:19,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 15:16:19,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:45536 deadline: 1689953779098, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
2023-07-21 15:16:19,099 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-21 15:16:19,101 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 15:16:19,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:16:19,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:16:19,103 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:35121, jenkins-hbase17.apache.org:36003, jenkins-hbase17.apache.org:41609, jenkins-hbase17.apache.org:44851], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 15:16:19,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default
2023-07-21 15:16:19,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 15:16:19,168 INFO [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C44851%2C1689952578807, suffix=, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44851,1689952578807, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32
2023-07-21 15:16:19,175 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=554 (was 514)
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44851
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RS-EventLoopGroup-14-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1924435486_17 at /127.0.0.1:49264 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    java.io.DataInputStream.read(DataInputStream.java:149)
    org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp930954381-1680
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp1401117531-1987
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS:3;jenkins-hbase17:44851-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.8
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: hconnection-0x52e8eba-metaLookup-shared--pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36003
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: qtp1978350546-1753
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp212825116-1709
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp1978350546-1750
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43821
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:37247
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41609
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x5cc30725-SendThread(127.0.0.1:62052)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)

Potentially hanging thread: RS-EventLoopGroup-9-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.9
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: M:0;jenkins-hbase17:43821
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64)
    org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634)
    org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957)
    org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006)
    org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x66b5f915-SendThread(127.0.0.1:62052)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)

Potentially hanging thread: qtp1978350546-1749
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase17:36003-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:38732 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1707 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x52e8eba-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x005f37fb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:2;jenkins-hbase17:35121-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: 
BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1594301014_17 at /127.0.0.1:39530 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp930954381-1684 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase17:44851 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp1340935856-1744 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x1b26b8b6-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp930954381-1681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43113,1689952559498 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x1b26b8b6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-646185821_17 at /127.0.0.1:49260 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,36003,1689952569461 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1657426390-1650 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:39554 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:35121Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1924435486_17 at /127.0.0.1:39524 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1712 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x3b4ef0b0-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1983 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:35121 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x1b26b8b6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1401117531-1989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x071edc8b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x660e1ef9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x3b4ef0b0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,41609,1689952569336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1340935856-1740 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-57009968-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x4b51046c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1984-acceptor-0@64935358-ServerConnector@6f180e2b{HTTP/1.1, (http/1.1)}{0.0.0.0:41211} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp930954381-1682 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1657426390-1652 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x5cc30725-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp930954381-1683 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6d18c956-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x005f37fb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:36003Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1340935856-1742 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:44851Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase17:36003 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase17:41609 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1594301014_17 at 
/127.0.0.1:38712 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1988 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1986 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1978350546-1748 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1594301014_17 at /127.0.0.1:49270 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741893_1069] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-646185821_17 at /127.0.0.1:39508 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6bb14c6-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x66b5f915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x66b5f915-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1657426390-1649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1d7dc142-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952569896 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x5cc30725 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x071edc8b-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1978350546-1755 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1710 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1465067620_17 at /127.0.0.1:56308 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp930954381-1677 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase17:35121-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1978350546-1754 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1978350546-1752-acceptor-0@7b5732af-ServerConnector@7b2ccb1f{HTTP/1.1, (http/1.1)}{0.0.0.0:33921} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x660e1ef9-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging 
thread: ReadOnlyZKClient-127.0.0.1:62052@0x3b4ef0b0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1657426390-1647-acceptor-0@6fb7cf3c-ServerConnector@58b12215{HTTP/1.1, (http/1.1)}{0.0.0.0:41793} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,41609,1689952569336.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401117531-1990 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp930954381-1678-acceptor-0@4f72ba68-ServerConnector@2b376f34{HTTP/1.1, (http/1.1)}{0.0.0.0:46667} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1657426390-1648 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1657426390-1651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1340935856-1741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase17:41609-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1713 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-646185821_17 at /127.0.0.1:38706 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x660e1ef9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp1340935856-1743 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:49278 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6bb14c6-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-1f934370-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x6bb14c6-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1714 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1978350546-1751 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1657426390-1646 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6bb14c6-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase17:41609Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3-prefix:jenkins-hbase17.apache.org,35121,1689952569592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x4b51046c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35121 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741891_1067, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44851 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1340935856-1739 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x005f37fb-SendThread(127.0.0.1:62052) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952569893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7710fd8f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x071edc8b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$92/847231106.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1708-acceptor-0@10cb2c7d-ServerConnector@3975476a{HTTP/1.1, (http/1.1)}{0.0.0.0:40299} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:49276 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp212825116-1711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:38726 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1340935856-1738-acceptor-0@6325de8d-ServerConnector@61d3f6e5{HTTP/1.1, (http/1.1)}{0.0.0.0:34541} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1943811146) connection to localhost.localdomain/127.0.0.1:37247 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost.localdomain:37247 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp930954381-1679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1924435486_17 at /127.0.0.1:38710 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741892_1068] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62052@0x4b51046c-SendThread(127.0.0.1:62052) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-488029319_17 at /127.0.0.1:39538 [Receiving block BP-642756276-136.243.18.41-1689952529515:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1657426390-1653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData-prefix:jenkins-hbase17.apache.org,43821,1689952569195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1340935856-1737 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1162954144.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=843 (was 765) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=697 (was 780), ProcessCount=186 (was 186), AvailableMemoryMB=1841 (was 1809) - AvailableMemoryMB LEAK? 
- 2023-07-21 15:16:19,178 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-21 15:16:19,206 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:19,207 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=554, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=697, ProcessCount=189, AvailableMemoryMB=1834 2023-07-21 15:16:19,207 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-21 15:16:19,207 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-21 15:16:19,213 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:19,221 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:19,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:19,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:19,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:19,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
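The ResourceChecker output above compares per-test resource counts (threads, open file descriptors, system load, memory) and, when the live-thread count grows past its 500-thread threshold, appears to list every live thread with its stack under "Potentially hanging thread". A minimal sketch of how such a listing can be produced with the standard JDK API; the class and method names below are illustrative only, not the actual ResourceChecker implementation.

    import java.util.Map;

    public class ThreadReportSketch {
      // Hypothetical helper: snapshot all live threads and print each one in the
      // same "Potentially hanging thread: <name>" + stack-frame style seen above.
      public static void printPotentiallyHangingThreads() {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : stacks.entrySet()) {
          System.out.println("Potentially hanging thread: " + e.getKey().getName());
          for (StackTraceElement frame : e.getValue()) {
            // e.g. "sun.misc.Unsafe.park(Native Method)"
            System.out.println("    " + frame);
          }
        }
      }
    }

A before/after pair of such snapshots is what lets the checker report deltas like "Thread=554 (was 521)" and flag a possible leak.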
2023-07-21 15:16:19,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables
2023-07-21 15:16:19,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default
2023-07-21 15:16:19,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers
2023-07-21 15:16:19,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master
2023-07-21 15:16:19,249 INFO [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44851,1689952578807/jenkins-hbase17.apache.org%2C44851%2C1689952578807.1689952579169
2023-07-21 15:16:19,256 DEBUG [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]]
2023-07-21 15:16:19,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:19,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 15:16:19,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 15:16:19,271 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-21 15:16:19,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master
2023-07-21 15:16:19,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 15:16:19,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 15:16:19,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 15:16:19,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 15:16:19,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup
2023-07-21 15:16:19,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 15:16:19,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43821] to rsgroup master
2023-07-21 15:16:19,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 15:16:19,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:45536 deadline: 1689953779299, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
2023-07-21 15:16:19,301 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
...
1 more 2023-07-21 15:16:19,303 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:19,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:19,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:19,309 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:35121, jenkins-hbase17.apache.org:36003, jenkins-hbase17.apache.org:41609, jenkins-hbase17.apache.org:44851], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:19,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:19,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:19,311 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-21 15:16:19,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:19,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:19,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:19,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:19,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:19,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:19,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:19,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:19,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41609, jenkins-hbase17.apache.org:36003, jenkins-hbase17.apache.org:35121] to rsgroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:19,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:19,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:19,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(238): Moving server region 603dc738ccec189e3bde34ff84c46389, which do not belong to RSGroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] procedure2.ProcedureExecutor(1029): Stored pid=136, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:16:19,339 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE 2023-07-21 15:16:19,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(238): Moving server region 7697a92683cfac49519e4a4111355983, which do not belong to RSGroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:16:19,340 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:19,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(238): Moving server region e66f96fe3a93ede34be690ff9e55183e, which do not belong to RSGroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,340 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE 2023-07-21 15:16:19,341 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952579340"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952579340"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952579340"}]},"ts":"1689952579340"} 2023-07-21 15:16:19,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] procedure2.ProcedureExecutor(1029): Stored pid=138, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, REOPEN/MOVE 2023-07-21 15:16:19,341 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:19,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:19,341 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, REOPEN/MOVE 2023-07-21 15:16:19,342 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952579341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952579341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952579341"}]},"ts":"1689952579341"} 2023-07-21 15:16:19,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] procedure2.ProcedureExecutor(1029): Stored pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 15:16:19,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:19,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(286): Moving 4 region(s) to group default, current retry=0 2023-07-21 15:16:19,342 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 15:16:19,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952579342"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952579342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952579342"}]},"ts":"1689952579342"} 2023-07-21 15:16:19,343 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,41609,1689952569336, state=CLOSING 2023-07-21 15:16:19,344 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 
15:16:19,344 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=139, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41609,1689952569336}] 2023-07-21 15:16:19,344 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:19,348 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; CloseRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,35121,1689952569592}] 2023-07-21 15:16:19,349 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 15:16:19,355 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=141, ppid=136, state=RUNNABLE; CloseRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:19,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; CloseRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,36003,1689952569461}] 2023-07-21 15:16:19,362 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=142, ppid=137, state=RUNNABLE; CloseRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:19,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; CloseRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,41609,1689952569336}] 2023-07-21 15:16:19,364 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=143, ppid=138, state=RUNNABLE; CloseRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:19,430 DEBUG [Finalizer] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66b5f915 to 127.0.0.1:62052 2023-07-21 15:16:19,431 DEBUG [Finalizer] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:19,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 15:16:19,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:16:19,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:16:19,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:16:19,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:16:19,504 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:16:19,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.83 KB heapSize=7 KB 2023-07-21 15:16:19,520 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.83 KB at sequenceid=172 
(bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/332f83b68a1a40e2b33e4b32f5c50f64 2023-07-21 15:16:19,534 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/332f83b68a1a40e2b33e4b32f5c50f64 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64 2023-07-21 15:16:19,542 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64, entries=33, sequenceid=172, filesize=8.6 K 2023-07-21 15:16:19,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.83 KB/3921, heapSize ~6.48 KB/6640, currentSize=0 B/0 for 1588230740 in 39ms, sequenceid=172, compaction requested=true 2023-07-21 15:16:19,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/recovered.edits/175.seqid, newMaxSeqId=175, maxSeqId=160 2023-07-21 15:16:19,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:19,571 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:19,571 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:16:19,571 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase17.apache.org,44851,1689952578807 record at close sequenceid=172 2023-07-21 15:16:19,585 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 15:16:19,585 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 15:16:19,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=139 2023-07-21 15:16:19,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=139, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41609,1689952569336 in 241 msec 2023-07-21 15:16:19,588 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,44851,1689952578807; forceNewPlan=false, retain=false 2023-07-21 15:16:19,738 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,44851,1689952578807, state=OPENING 2023-07-21 15:16:19,739 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:19,740 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,44851,1689952578807}] 2023-07-21 15:16:19,740 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:19,894 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:19,894 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:19,895 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:19,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 15:16:19,901 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:19,903 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C44851%2C1689952578807.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44851,1689952578807, archiveDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs, maxLogs=32 2023-07-21 15:16:19,917 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK] 2023-07-21 15:16:19,917 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK] 2023-07-21 15:16:19,917 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK] 2023-07-21 15:16:19,918 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,44851,1689952578807/jenkins-hbase17.apache.org%2C44851%2C1689952578807.meta.1689952579904.meta 2023-07-21 15:16:19,918 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35415,DS-ec97d673-8164-46e0-a29f-1cd213b16f56,DISK], DatanodeInfoWithStorage[127.0.0.1:46483,DS-3c205c17-2c52-402b-866d-d32f13caa455,DISK], DatanodeInfoWithStorage[127.0.0.1:36409,DS-779658b6-4e98-4970-b3d3-fb613cb8802e,DISK]] 2023-07-21 15:16:19,918 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 15:16:19,919 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 15:16:19,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 15:16:19,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 15:16:19,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:19,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info 2023-07-21 15:16:19,926 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 15:16:19,935 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99 2023-07-21 15:16:19,939 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64 2023-07-21 15:16:19,945 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:19,945 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:19,945 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:19,946 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 15:16:19,947 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:19,947 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier 2023-07-21 15:16:19,947 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 15:16:19,954 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:19,954 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/rep_barrier/c7e6e1836f7f4098a404b796a61af07f 2023-07-21 15:16:19,954 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:19,954 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 15:16:19,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:19,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table 2023-07-21 15:16:19,956 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 15:16:19,962 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:19,962 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/916df231b8fb48db908a7ebc1b240c3d 2023-07-21 15:16:19,971 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/table/f04365d3a5c54c78abbc1d4b48d634d7 2023-07-21 15:16:19,971 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:19,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:19,974 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740 2023-07-21 15:16:19,976 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 15:16:19,978 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 15:16:19,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=176; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11386089440, jitterRate=0.0604122132062912}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 15:16:19,979 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 15:16:19,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=144, masterSystemTime=1689952579894 2023-07-21 15:16:19,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 15:16:19,981 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 15:16:19,988 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 15:16:19,989 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 15:16:19,989 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 28681 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 15:16:19,989 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,44851,1689952578807, state=OPEN 2023-07-21 15:16:19,989 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-21 15:16:19,989 INFO [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-21 15:16:19,990 INFO [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64] into tmpdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp, totalSize=28.0 K 2023-07-21 15:16:19,990 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 15:16:19,990 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): 
Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 15:16:19,991 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] compactions.Compactor(207): Compacting 61fcafcc9c244e3eb1f1f966564d855c, keycount=62, bloomtype=NONE, size=11.7 K, encoding=NONE, compression=NONE, seqNum=142, earliestPutTs=1689952539557 2023-07-21 15:16:19,992 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] compactions.Compactor(207): Compacting 018c0ea790dd452bbb94d051c83f4c99, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=157, earliestPutTs=1689952565093 2023-07-21 15:16:19,992 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] compactions.Compactor(207): Compacting 332f83b68a1a40e2b33e4b32f5c50f64, keycount=33, bloomtype=NONE, size=8.6 K, encoding=NONE, compression=NONE, seqNum=172, earliestPutTs=1689952574448 2023-07-21 15:16:19,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-21 15:16:19,996 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,44851,1689952578807 in 251 msec 2023-07-21 15:16:19,998 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 654 msec 2023-07-21 15:16:20,008 INFO [RS:3;jenkins-hbase17:44851-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#14 average throughput is 5.76 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 15:16:20,031 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/25394ed2adb044bf80a2b52822f16f18 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/25394ed2adb044bf80a2b52822f16f18 2023-07-21 15:16:20,042 INFO [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 25394ed2adb044bf80a2b52822f16f18(size=10.8 K), total size for store is 10.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 15:16:20,042 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-21 15:16:20,042 INFO [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1689952579980; duration=0sec 2023-07-21 15:16:20,043 DEBUG [RS:3;jenkins-hbase17:44851-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 15:16:20,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:16:20,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing e66f96fe3a93ede34be690ff9e55183e, disabling compactions & flushes 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:16:20,148 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:20,148 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
after waiting 0 ms 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. after waiting 0 ms 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:20,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 603dc738ccec189e3bde34ff84c46389 1/1 column families, dataSize=2.25 KB heapSize=3.77 KB 2023-07-21 15:16:20,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/26.seqid, newMaxSeqId=26, maxSeqId=23 2023-07-21 15:16:20,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 15:16:20,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:20,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 7697a92683cfac49519e4a4111355983 move to jenkins-hbase17.apache.org,44851,1689952578807 record at close sequenceid=24 2023-07-21 15:16:20,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:20,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:20,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding e66f96fe3a93ede34be690ff9e55183e move to jenkins-hbase17.apache.org,44851,1689952578807 record at close sequenceid=5 2023-07-21 15:16:20,182 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,184 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=CLOSED 2023-07-21 15:16:20,184 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952580184"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952580184"}]},"ts":"1689952580184"} 2023-07-21 15:16:20,184 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41609] ipc.CallRunner(144): callId: 67 service: ClientService methodName: Mutate size: 209 connection: 136.243.18.41:57514 deadline: 1689952640184, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=44851 startCode=1689952578807. As of locationSeqNum=172. 2023-07-21 15:16:20,190 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=CLOSED 2023-07-21 15:16:20,190 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952580190"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952580190"}]},"ts":"1689952580190"} 2023-07-21 15:16:20,190 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:20,192 INFO [RS-EventLoopGroup-17-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:20,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-21 15:16:20,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; CloseRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,36003,1689952569461 in 837 msec 2023-07-21 15:16:20,195 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,44851,1689952578807; forceNewPlan=false, retain=false 2023-07-21 15:16:20,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.25 KB at sequenceid=95 (bloomFilter=true), 
to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/c767688321414da480d353fe61269aff 2023-07-21 15:16:20,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c767688321414da480d353fe61269aff 2023-07-21 15:16:20,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/c767688321414da480d353fe61269aff as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/c767688321414da480d353fe61269aff 2023-07-21 15:16:20,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c767688321414da480d353fe61269aff 2023-07-21 15:16:20,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/c767688321414da480d353fe61269aff, entries=5, sequenceid=95, filesize=5.3 K 2023-07-21 15:16:20,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.25 KB/2306, heapSize ~3.76 KB/3848, currentSize=0 B/0 for 603dc738ccec189e3bde34ff84c46389 in 74ms, sequenceid=95, compaction requested=false 2023-07-21 15:16:20,230 DEBUG [StoreCloser-hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2] to archive 2023-07-21 15:16:20,231 DEBUG [StoreCloser-hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-21 15:16:20,234 DEBUG [StoreCloser-hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/6ca2192a296d47859e18b9a84011d90b 2023-07-21 15:16:20,236 DEBUG [StoreCloser-hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/e9bcd7bb10a04f6bbcfbde3e28e08f7b 2023-07-21 15:16:20,238 DEBUG [StoreCloser-hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/a9f6458d40a74267b626b8d3db4c94e2 2023-07-21 15:16:20,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=83 2023-07-21 15:16:20,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:20,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:16:20,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:20,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(3513): Adding 603dc738ccec189e3bde34ff84c46389 move to jenkins-hbase17.apache.org,44851,1689952578807 record at close sequenceid=95 2023-07-21 15:16:20,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,284 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=CLOSED 2023-07-21 15:16:20,284 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952580284"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689952580284"}]},"ts":"1689952580284"} 2023-07-21 15:16:20,295 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-21 15:16:20,295 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; CloseRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,35121,1689952569592 in 940 msec 2023-07-21 15:16:20,297 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=136, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,44851,1689952578807; forceNewPlan=false, retain=false 2023-07-21 15:16:20,298 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,298 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952580298"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952580298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952580298"}]},"ts":"1689952580298"} 2023-07-21 15:16:20,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-21 15:16:20,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; CloseRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,41609,1689952569336 in 932 msec 2023-07-21 15:16:20,298 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,299 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952580298"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952580298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952580298"}]},"ts":"1689952580298"} 2023-07-21 15:16:20,299 INFO 
[PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase17.apache.org,44851,1689952578807; forceNewPlan=false, retain=false 2023-07-21 15:16:20,299 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=137, state=RUNNABLE; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,44851,1689952578807}] 2023-07-21 15:16:20,300 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=136, state=RUNNABLE; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,44851,1689952578807}] 2023-07-21 15:16:20,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] procedure.ProcedureSyncWait(216): waitFor pid=136 2023-07-21 15:16:20,450 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,450 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952580450"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689952580450"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689952580450"}]},"ts":"1689952580450"} 2023-07-21 15:16:20,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=138, state=RUNNABLE; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,44851,1689952578807}] 2023-07-21 15:16:20,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 603dc738ccec189e3bde34ff84c46389, NAME => 'hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. service=MultiRowMutationService 2023-07-21 15:16:20,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,457 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,458 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:20,458 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m 2023-07-21 15:16:20,458 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 603dc738ccec189e3bde34ff84c46389 columnFamilyName m 2023-07-21 15:16:20,465 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c767688321414da480d353fe61269aff 2023-07-21 15:16:20,465 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/c767688321414da480d353fe61269aff 2023-07-21 15:16:20,470 DEBUG [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/fb958306d2ea4ca9816d37f319cf9f17 2023-07-21 15:16:20,470 INFO [StoreOpener-603dc738ccec189e3bde34ff84c46389-1] regionserver.HStore(310): Store=603dc738ccec189e3bde34ff84c46389/m, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:20,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:20,476 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 603dc738ccec189e3bde34ff84c46389; next sequenceid=99; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@57350d2d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:20,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:20,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389., pid=146, masterSystemTime=1689952580451 2023-07-21 15:16:20,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:20,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
2023-07-21 15:16:20,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7697a92683cfac49519e4a4111355983, NAME => 'hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:20,483 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=603dc738ccec189e3bde34ff84c46389, regionState=OPEN, openSeqNum=99, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,484 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689952580483"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952580483"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952580483"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952580483"}]},"ts":"1689952580483"} 2023-07-21 15:16:20,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:20,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,486 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,488 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:20,488 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info 2023-07-21 15:16:20,489 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7697a92683cfac49519e4a4111355983 columnFamilyName info 2023-07-21 15:16:20,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=136 2023-07-21 15:16:20,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=136, state=SUCCESS; OpenRegionProcedure 603dc738ccec189e3bde34ff84c46389, server=jenkins-hbase17.apache.org,44851,1689952578807 in 186 msec 2023-07-21 15:16:20,491 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=603dc738ccec189e3bde34ff84c46389, REOPEN/MOVE in 1.1510 sec 2023-07-21 15:16:20,499 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:20,499 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/15fdaef33b9647fab27918fa7b51727e 2023-07-21 15:16:20,512 DEBUG [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/info/f7f6dd522e854d8fab91aaec79abb8df 2023-07-21 15:16:20,512 INFO [StoreOpener-7697a92683cfac49519e4a4111355983-1] regionserver.HStore(310): Store=7697a92683cfac49519e4a4111355983/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:20,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:20,518 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7697a92683cfac49519e4a4111355983; next sequenceid=27; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10060040640, jitterRate=-0.06308570504188538}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 15:16:20,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:20,518 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., pid=145, masterSystemTime=1689952580451 2023-07-21 15:16:20,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:20,521 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7697a92683cfac49519e4a4111355983, regionState=OPEN, openSeqNum=27, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,521 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689952580521"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952580521"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952580521"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952580521"}]},"ts":"1689952580521"} 2023-07-21 15:16:20,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=137 2023-07-21 15:16:20,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=137, state=SUCCESS; OpenRegionProcedure 7697a92683cfac49519e4a4111355983, server=jenkins-hbase17.apache.org,44851,1689952578807 in 223 msec 2023-07-21 15:16:20,526 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7697a92683cfac49519e4a4111355983, REOPEN/MOVE in 1.1850 sec 2023-07-21 15:16:20,607 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:20,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e66f96fe3a93ede34be690ff9e55183e, NAME => 'hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 15:16:20,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 15:16:20,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,609 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,610 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:20,610 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/q 2023-07-21 15:16:20,611 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName q 2023-07-21 15:16:20,611 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:20,612 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,613 DEBUG 
[StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:20,613 DEBUG [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/u 2023-07-21 15:16:20,613 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e66f96fe3a93ede34be690ff9e55183e columnFamilyName u 2023-07-21 15:16:20,614 INFO [StoreOpener-e66f96fe3a93ede34be690ff9e55183e-1] regionserver.HStore(310): Store=e66f96fe3a93ede34be690ff9e55183e/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 15:16:20,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-21 15:16:20,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:20,623 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened e66f96fe3a93ede34be690ff9e55183e; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11645034880, jitterRate=0.08452838659286499}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 15:16:20,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:20,624 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., pid=147, masterSystemTime=1689952580603 2023-07-21 15:16:20,627 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:20,627 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:20,627 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=e66f96fe3a93ede34be690ff9e55183e, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:20,628 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689952580627"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689952580627"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689952580627"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689952580627"}]},"ts":"1689952580627"} 2023-07-21 15:16:20,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=138 2023-07-21 15:16:20,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=138, state=SUCCESS; OpenRegionProcedure e66f96fe3a93ede34be690ff9e55183e, server=jenkins-hbase17.apache.org,44851,1689952578807 in 179 msec 2023-07-21 15:16:20,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=e66f96fe3a93ede34be690ff9e55183e, REOPEN/MOVE in 1.2910 sec 2023-07-21 15:16:21,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,35121,1689952569592, jenkins-hbase17.apache.org,36003,1689952569461, jenkins-hbase17.apache.org,41609,1689952569336] are moved back to default 2023-07-21 15:16:21,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:21,344 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35121] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Scan size: 136 connection: 136.243.18.41:37878 deadline: 1689952641344, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=44851 startCode=1689952578807. As of locationSeqNum=95. 2023-07-21 15:16:21,448 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41609] ipc.CallRunner(144): callId: 5 service: ClientService methodName: Get size: 88 connection: 136.243.18.41:52768 deadline: 1689952641447, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=44851 startCode=1689952578807. As of locationSeqNum=172. 2023-07-21 15:16:21,550 DEBUG [hconnection-0x52e8eba-shared-pool-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 15:16:21,551 INFO [RS-EventLoopGroup-17-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 15:16:21,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:21,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:21,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:21,574 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 15:16:21,575 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37892, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 15:16:21,575 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35121] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,35121,1689952569592' ***** 2023-07-21 15:16:21,575 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35121] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x2fef016b 2023-07-21 15:16:21,575 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:21,579 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:21,579 INFO [RS:2;jenkins-hbase17:35121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27f832bf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:21,580 INFO [RS:2;jenkins-hbase17:35121] server.AbstractConnector(383): Stopped 
ServerConnector@61d3f6e5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:21,580 INFO [RS:2;jenkins-hbase17:35121] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:21,581 INFO [RS:2;jenkins-hbase17:35121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f3c2ab8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:21,582 INFO [RS:2;jenkins-hbase17:35121] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@66a621a5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:21,582 INFO [RS:2;jenkins-hbase17:35121] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:21,583 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:21,583 INFO [RS:2;jenkins-hbase17:35121] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:21,583 INFO [RS:2;jenkins-hbase17:35121] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:21,583 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:21,583 DEBUG [RS:2;jenkins-hbase17:35121] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b51046c to 127.0.0.1:62052 2023-07-21 15:16:21,583 DEBUG [RS:2;jenkins-hbase17:35121] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:21,583 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35121,1689952569592; all regions closed. 2023-07-21 15:16:21,589 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592/jenkins-hbase17.apache.org%2C35121%2C1689952569592.1689952570139 not finished, retry = 0 2023-07-21 15:16:21,632 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:21,692 DEBUG [RS:2;jenkins-hbase17:35121] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:21,692 INFO [RS:2;jenkins-hbase17:35121] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C35121%2C1689952569592:(num 1689952570139) 2023-07-21 15:16:21,692 DEBUG [RS:2;jenkins-hbase17:35121] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:21,693 INFO [RS:2;jenkins-hbase17:35121] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:21,693 INFO [RS:2;jenkins-hbase17:35121] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:21,693 INFO [RS:2;jenkins-hbase17:35121] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:21,693 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:16:21,693 INFO [RS:2;jenkins-hbase17:35121] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:21,693 INFO [RS:2;jenkins-hbase17:35121] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:21,694 INFO [RS:2;jenkins-hbase17:35121] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35121 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:21,696 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:21,700 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,35121,1689952569592] 2023-07-21 15:16:21,701 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,35121,1689952569592; numProcessing=1 2023-07-21 15:16:21,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,701 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,35121,1689952569592 already deleted, retry=false 2023-07-21 15:16:21,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,701 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase17.apache.org,35121,1689952569592 on jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:21,702 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,702 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,702 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,702 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 znode expired, triggering replicatorRemoved event 2023-07-21 15:16:21,702 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 znode expired, triggering replicatorRemoved event 2023-07-21 15:16:21,702 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,702 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase17.apache.org,35121,1689952569592 znode expired, triggering replicatorRemoved event 2023-07-21 15:16:21,702 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=148, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase17.apache.org,35121,1689952569592, splitWal=true, meta=false 2023-07-21 15:16:21,702 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=148 for 
jenkins-hbase17.apache.org,35121,1689952569592 (carryingMeta=false) jenkins-hbase17.apache.org,35121,1689952569592/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@44dc0720[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-21 15:16:21,703 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,704 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=148, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,35121,1689952569592, splitWal=true, meta=false 2023-07-21 15:16:21,704 WARN [RS-EventLoopGroup-16-2] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase17.apache.org/136.243.18.41:35121 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:35121 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 15:16:21,704 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:21,705 DEBUG [RS-EventLoopGroup-16-2] ipc.FailedServers(52): Added failed server with address jenkins-hbase17.apache.org/136.243.18.41:35121 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase17.apache.org/136.243.18.41:35121 2023-07-21 15:16:21,706 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,706 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase17.apache.org,35121,1689952569592 had 0 regions 2023-07-21 15:16:21,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:21,708 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=148, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase17.apache.org,35121,1689952569592, splitWal=true, meta=false, isMeta: false 2023-07-21 15:16:21,709 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592-splitting 2023-07-21 15:16:21,710 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592-splitting dir is empty, no logs to split. 2023-07-21 15:16:21,710 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,35121,1689952569592 WAL count=0, meta=false 2023-07-21 15:16:21,712 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592-splitting dir is empty, no logs to split. 2023-07-21 15:16:21,712 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase17.apache.org,35121,1689952569592 WAL count=0, meta=false 2023-07-21 15:16:21,712 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase17.apache.org,35121,1689952569592 WAL splitting is done? wals=0, meta=false 2023-07-21 15:16:21,714 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase17.apache.org,35121,1689952569592 failed, ignore...File hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,35121,1689952569592-splitting does not exist. 2023-07-21 15:16:21,715 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase17.apache.org,35121,1689952569592 after splitting done 2023-07-21 15:16:21,715 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase17.apache.org,35121,1689952569592 from processing; numProcessing=0 2023-07-21 15:16:21,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, state=SUCCESS; ServerCrashProcedure jenkins-hbase17.apache.org,35121,1689952569592, splitWal=true, meta=false in 13 msec 2023-07-21 15:16:21,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(2362): Client=jenkins//136.243.18.41 clear dead region servers. 
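The ServerCrashProcedure above finishes with no WALs to split, after which the client issues the "clear dead region servers" call seen at MasterRpcServices(2362). A minimal client-side sketch of that call, assuming the stock HBase 2.x Admin API rather than the test's own helper code:

    // Hedged sketch (assumed standard HBase 2.x client API, not the test's code):
    // ask the master to clear servers from its dead-server list once their
    // ServerCrashProcedures have finished.
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClearDeadServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Dead servers as currently reported by the master, e.g.
          // jenkins-hbase17.apache.org,35121,1689952569592 in the log above.
          List<ServerName> dead = admin.getClusterMetrics().getDeadServerNames();
          // Servers that could not be cleared (still being processed) are returned.
          List<ServerName> notCleared = admin.clearDeadServers(dead);
          System.out.println("Not cleared: " + notCleared);
        }
      }
    }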
2023-07-21 15:16:21,801 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:21,801 INFO [RS:2;jenkins-hbase17:35121] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35121,1689952569592; zookeeper connection closed. 2023-07-21 15:16:21,801 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:35121-0x1018872b379001f, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:21,801 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f4a7432] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f4a7432 2023-07-21 15:16:21,815 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:21,815 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:21,815 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,815 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:21,816 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 15:16:21,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:21,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:21,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 15:16:21,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase17.apache.org:35121] from RSGroup done 2023-07-21 15:16:21,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:21,826 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36003] ipc.CallRunner(144): callId: 84 service: ClientService methodName: Scan size: 146 connection: 136.243.18.41:60626 deadline: 1689952641826, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase17.apache.org port=44851 startCode=1689952578807. As of locationSeqNum=24. 2023-07-21 15:16:21,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:21,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:21,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 15:16:21,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:21,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [] to rsgroup default 2023-07-21 15:16:21,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:21,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup master 2023-07-21 15:16:21,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:21,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 15:16:21,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:21,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//136.243.18.41 move tables [] to rsgroup default 2023-07-21 15:16:21,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
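The RemoveRSGroup / MoveTables / MoveServers requests in this stretch are the per-test cleanup: tables and servers go back to the default group and the temporary groups are dropped. A hedged sketch of that cleanup against the RSGroupAdminClient API from the hbase-rsgroup module (group and server names taken from the log; the constructor and method signatures are assumed from branch-2.4):

    // Sketch of the rsgroup cleanup, assuming RSGroupAdminClient from the
    // hbase-rsgroup module on branch-2.4.
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      static void cleanup(Connection conn) throws Exception {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        // Drop the temporary "master" group first, as in the log.
        groupAdmin.removeRSGroup("master");
        // Empty moveTables()/moveServers() calls are ignored by the server
        // ("moveTables() passed an empty set. Ignoring." above), so only the
        // non-empty server move matters here.
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase17.apache.org", 41609),
            Address.fromParts("jenkins-hbase17.apache.org", 36003)));
        groupAdmin.moveServers(servers, "default");
        // Finally remove the group created for the test.
        groupAdmin.removeRSGroup("Group_testClearDeadServers_1400871057");
      }
    }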
2023-07-21 15:16:21,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveTables 2023-07-21 15:16:21,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:41609, jenkins-hbase17.apache.org:36003] to rsgroup default 2023-07-21 15:16:21,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:21,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:21,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_1400871057, current retry=0 2023-07-21 15:16:21,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase17.apache.org,36003,1689952569461, jenkins-hbase17.apache.org,41609,1689952569336] are moved back to Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_1400871057 => default 2023-07-21 15:16:21,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.MoveServers 2023-07-21 15:16:21,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//136.243.18.41 remove rsgroup Group_testClearDeadServers_1400871057 2023-07-21 15:16:21,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:21,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 15:16:21,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 15:16:21,974 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:21,984 INFO [Listener at 
localhost.localdomain/38883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 15:16:21,984 INFO [Listener at localhost.localdomain/38883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 15:16:21,985 INFO [Listener at localhost.localdomain/38883] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41877 2023-07-21 15:16:21,985 INFO [Listener at localhost.localdomain/38883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 15:16:21,987 DEBUG [Listener at localhost.localdomain/38883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 15:16:21,987 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:21,988 INFO [Listener at localhost.localdomain/38883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 15:16:21,989 INFO [Listener at localhost.localdomain/38883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41877 connecting to ZooKeeper ensemble=127.0.0.1:62052 2023-07-21 15:16:21,991 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:418770x0, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 15:16:21,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41877-0x1018872b379002a connected 2023-07-21 15:16:21,993 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 15:16:21,993 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 15:16:21,994 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ZKUtil(164): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 15:16:21,996 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41877 2023-07-21 15:16:21,998 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41877 2023-07-21 15:16:21,998 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41877 2023-07-21 15:16:22,000 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41877 2023-07-21 15:16:22,000 DEBUG [Listener at localhost.localdomain/38883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41877 2023-07-21 15:16:22,002 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 15:16:22,002 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 15:16:22,002 INFO [Listener at localhost.localdomain/38883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] http.HttpServer(1146): Jetty bound to port 39029 2023-07-21 15:16:22,003 INFO [Listener at localhost.localdomain/38883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 15:16:22,011 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:22,011 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3c620e8d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,AVAILABLE} 2023-07-21 15:16:22,012 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:22,012 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f6d9003{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 15:16:22,116 INFO [Listener at localhost.localdomain/38883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 15:16:22,118 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 15:16:22,119 INFO [Listener at localhost.localdomain/38883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 15:16:22,119 INFO [Listener at localhost.localdomain/38883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 15:16:22,121 INFO [Listener at localhost.localdomain/38883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 15:16:22,122 INFO [Listener at localhost.localdomain/38883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1b5e33bd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/java.io.tmpdir/jetty-0_0_0_0-39029-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8338236800669656251/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:22,124 INFO [Listener at localhost.localdomain/38883] server.AbstractConnector(333): Started ServerConnector@2bd8377b{HTTP/1.1, (http/1.1)}{0.0.0.0:39029} 2023-07-21 15:16:22,124 INFO [Listener at localhost.localdomain/38883] server.Server(415): Started @54935ms 2023-07-21 15:16:22,129 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(951): ClusterId : efdb1c09-bf26-44c2-a633-9f7b8a53fd03 2023-07-21 15:16:22,129 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 15:16:22,130 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 15:16:22,130 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 15:16:22,132 DEBUG [RS:4;jenkins-hbase17:41877] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 15:16:22,133 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ReadOnlyZKClient(139): Connect 0x134125e3 to 127.0.0.1:62052 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 15:16:22,140 DEBUG [RS:4;jenkins-hbase17:41877] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d86c9fb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 15:16:22,140 DEBUG [RS:4;jenkins-hbase17:41877] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19b94e93, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:22,147 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase17:41877 2023-07-21 15:16:22,147 INFO [RS:4;jenkins-hbase17:41877] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 15:16:22,147 INFO [RS:4;jenkins-hbase17:41877] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 15:16:22,147 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 15:16:22,148 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,43821,1689952569195 with isa=jenkins-hbase17.apache.org/136.243.18.41:41877, startcode=1689952581983 2023-07-21 15:16:22,148 DEBUG [RS:4;jenkins-hbase17:41877] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 15:16:22,153 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52491, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 15:16:22,154 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43821] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,154 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
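Once the new region server has reported for duty and the ServerEventsListenerThread begins "Updating default servers", the default group membership can be checked through the same admin interface; a short sketch, again assuming the RSGroupAdminClient / RSGroupInfo API from the hbase-rsgroup module:

    // Sketch: verify that the freshly registered server shows up in the
    // "default" rsgroup (the listener logs "Updated with servers: 4" below).
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupCheckSketch {
      static boolean isInDefaultGroup(Connection conn, String host, int port) throws Exception {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo defaultGroup = groupAdmin.getRSGroupInfo("default");
        Set<Address> members = defaultGroup.getServers();
        return members.contains(Address.fromParts(host, port));
      }
    }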
2023-07-21 15:16:22,155 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3 2023-07-21 15:16:22,155 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37247 2023-07-21 15:16:22,155 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41793 2023-07-21 15:16:22,156 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,156 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,156 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,156 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:22,156 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,157 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 15:16:22,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,157 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,157 WARN [RS:4;jenkins-hbase17:41877] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 15:16:22,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,157 INFO [RS:4;jenkins-hbase17:41877] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 15:16:22,158 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/WALs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,158 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41877,1689952581983] 2023-07-21 15:16:22,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,158 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase17.apache.org,43821,1689952569195] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 15:16:22,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,163 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,164 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,164 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,164 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ZKUtil(162): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,165 DEBUG [RS:4;jenkins-hbase17:41877] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 15:16:22,165 INFO [RS:4;jenkins-hbase17:41877] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 15:16:22,167 INFO [RS:4;jenkins-hbase17:41877] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 15:16:22,167 INFO [RS:4;jenkins-hbase17:41877] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 15:16:22,168 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:22,168 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 15:16:22,170 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,172 DEBUG [RS:4;jenkins-hbase17:41877] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-21 15:16:22,182 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:22,182 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:22,182 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 15:16:22,192 INFO [RS:4;jenkins-hbase17:41877] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 15:16:22,192 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41877,1689952581983-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 15:16:22,205 INFO [RS:4;jenkins-hbase17:41877] regionserver.Replication(203): jenkins-hbase17.apache.org,41877,1689952581983 started 2023-07-21 15:16:22,205 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41877,1689952581983, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41877, sessionid=0x1018872b379002a 2023-07-21 15:16:22,205 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 15:16:22,205 DEBUG [RS:4;jenkins-hbase17:41877] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,205 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41877,1689952581983' 2023-07-21 15:16:22,205 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 15:16:22,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//136.243.18.41 add rsgroup master 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41877,1689952581983' 2023-07-21 15:16:22,206 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 15:16:22,207 DEBUG [RS:4;jenkins-hbase17:41877] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 15:16:22,207 DEBUG [RS:4;jenkins-hbase17:41877] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 15:16:22,207 INFO [RS:4;jenkins-hbase17:41877] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 15:16:22,207 INFO [RS:4;jenkins-hbase17:41877] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
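The "Restoring servers: 1" step that produced this RS:4 instance is the test base class bringing the minicluster back to its expected region server count after testClearDeadServers stopped one. A minimal sketch of starting an extra region server on a running minicluster, assuming the HBaseTestingUtility / MiniHBaseCluster test API:

    // Sketch: start one additional region server on the running minicluster,
    // assuming the HBaseTestingUtility / MiniHBaseCluster test API.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

    public class RestoreRegionServerSketch {
      static void restoreOneRegionServer(HBaseTestingUtility util) throws Exception {
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // Spins up a new HRegionServer thread (RS:4 / port 41877 in the log)
        // which connects to ZooKeeper and reports for duty to the master.
        RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline();
      }
    }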
2023-07-21 15:16:22,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 15:16:22,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 15:16:22,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 15:16:22,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 15:16:22,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:22,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:22,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//136.243.18.41 move servers [jenkins-hbase17.apache.org:43821] to rsgroup master 2023-07-21 15:16:22,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 15:16:22,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 119 connection: 136.243.18.41:45536 deadline: 1689953782214, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist. 2023-07-21 15:16:22,215 WARN [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase17.apache.org:43821 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 15:16:22,220 INFO [Listener at localhost.localdomain/38883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 15:16:22,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//136.243.18.41 list rsgroup 2023-07-21 15:16:22,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 15:16:22,221 INFO [Listener at localhost.localdomain/38883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase17.apache.org:36003, jenkins-hbase17.apache.org:41609, jenkins-hbase17.apache.org:41877, jenkins-hbase17.apache.org:44851], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 15:16:22,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//136.243.18.41 initiates rsgroup info retrieval, group=default 2023-07-21 15:16:22,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43821] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /136.243.18.41) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 15:16:22,248 INFO [Listener at localhost.localdomain/38883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=584 (was 554) - Thread LEAK? -, OpenFileDescriptor=903 (was 843) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=705 (was 697) - SystemLoadAverage LEAK? 
-, ProcessCount=186 (was 189), AvailableMemoryMB=1692 (was 1834) 2023-07-21 15:16:22,248 WARN [Listener at localhost.localdomain/38883] hbase.ResourceChecker(130): Thread=584 is superior to 500 2023-07-21 15:16:22,248 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 15:16:22,248 INFO [Listener at localhost.localdomain/38883] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 15:16:22,248 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x005f37fb to 127.0.0.1:62052 2023-07-21 15:16:22,248 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,248 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 15:16:22,248 DEBUG [Listener at localhost.localdomain/38883] util.JVMClusterUtil(257): Found active master hash=523716767, stopped=false 2023-07-21 15:16:22,249 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 15:16:22,249 DEBUG [Listener at localhost.localdomain/38883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 15:16:22,249 INFO [Listener at localhost.localdomain/38883] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:22,250 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:22,250 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:22,250 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:22,250 INFO [Listener at localhost.localdomain/38883] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 15:16:22,250 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:22,250 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 15:16:22,251 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:22,251 DEBUG [Listener at localhost.localdomain/38883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x071edc8b to 127.0.0.1:62052 2023-07-21 15:16:22,251 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:22,251 DEBUG [Listener at localhost.localdomain/38883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:22,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:22,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:22,252 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,41609,1689952569336' ***** 2023-07-21 15:16:22,252 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:22,252 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,36003,1689952569461' ***** 2023-07-21 15:16:22,252 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:22,252 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:22,252 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 15:16:22,252 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,44851,1689952578807' ***** 2023-07-21 15:16:22,252 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:22,253 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:22,256 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,41877,1689952581983' ***** 2023-07-21 15:16:22,256 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:22,256 INFO [Listener at localhost.localdomain/38883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 15:16:22,256 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:22,256 INFO [RS:1;jenkins-hbase17:36003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6ca63760{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:22,260 INFO [RS:1;jenkins-hbase17:36003] server.AbstractConnector(383): Stopped ServerConnector@3975476a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,260 INFO [RS:1;jenkins-hbase17:36003] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:22,261 INFO [RS:0;jenkins-hbase17:41609] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27a61248{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:22,263 INFO [RS:1;jenkins-hbase17:36003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b3ffca7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:22,266 INFO [RS:0;jenkins-hbase17:41609] server.AbstractConnector(383): Stopped ServerConnector@2b376f34{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,266 INFO [RS:0;jenkins-hbase17:41609] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:22,266 INFO [RS:3;jenkins-hbase17:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@261e4b91{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:22,266 INFO [RS:4;jenkins-hbase17:41877] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1b5e33bd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 15:16:22,268 INFO [RS:0;jenkins-hbase17:41609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@45fd876{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:22,268 INFO [RS:1;jenkins-hbase17:36003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@450aba01{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:22,269 INFO [RS:0;jenkins-hbase17:41609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7bfc31fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:22,269 INFO [RS:3;jenkins-hbase17:44851] server.AbstractConnector(383): Stopped ServerConnector@6f180e2b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,270 INFO [RS:3;jenkins-hbase17:44851] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:22,271 INFO [RS:3;jenkins-hbase17:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46bd3e7f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:22,272 INFO [RS:3;jenkins-hbase17:44851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@257ebbb2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:22,272 INFO [RS:0;jenkins-hbase17:41609] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:22,272 INFO [RS:4;jenkins-hbase17:41877] server.AbstractConnector(383): Stopped 
ServerConnector@2bd8377b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,272 INFO [RS:0;jenkins-hbase17:41609] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:22,272 INFO [RS:4;jenkins-hbase17:41877] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:22,272 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:22,272 INFO [RS:3;jenkins-hbase17:44851] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:22,272 INFO [RS:0;jenkins-hbase17:41609] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:22,272 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:22,272 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,273 DEBUG [RS:0;jenkins-hbase17:41609] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b26b8b6 to 127.0.0.1:62052 2023-07-21 15:16:22,273 DEBUG [RS:0;jenkins-hbase17:41609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,272 INFO [RS:3;jenkins-hbase17:44851] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:22,273 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41609,1689952569336; all regions closed. 2023-07-21 15:16:22,273 INFO [RS:1;jenkins-hbase17:36003] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:22,274 INFO [RS:1;jenkins-hbase17:36003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:22,274 INFO [RS:1;jenkins-hbase17:36003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:22,273 INFO [RS:4;jenkins-hbase17:41877] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f6d9003{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:22,274 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,275 DEBUG [RS:1;jenkins-hbase17:36003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x660e1ef9 to 127.0.0.1:62052 2023-07-21 15:16:22,275 DEBUG [RS:1;jenkins-hbase17:36003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,275 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36003,1689952569461; all regions closed. 2023-07-21 15:16:22,274 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:22,273 INFO [RS:3;jenkins-hbase17:44851] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 15:16:22,275 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(3305): Received CLOSE for 7697a92683cfac49519e4a4111355983 2023-07-21 15:16:22,275 INFO [RS:4;jenkins-hbase17:41877] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3c620e8d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:22,276 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(3305): Received CLOSE for e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:22,276 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(3305): Received CLOSE for 603dc738ccec189e3bde34ff84c46389 2023-07-21 15:16:22,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7697a92683cfac49519e4a4111355983, disabling compactions & flushes 2023-07-21 15:16:22,278 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,278 DEBUG [RS:3;jenkins-hbase17:44851] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5cc30725 to 127.0.0.1:62052 2023-07-21 15:16:22,278 DEBUG [RS:3;jenkins-hbase17:44851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,279 INFO [RS:3;jenkins-hbase17:44851] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:22,279 INFO [RS:3;jenkins-hbase17:44851] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:22,279 INFO [RS:3;jenkins-hbase17:44851] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:22,279 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 15:16:22,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:22,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:22,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. after waiting 0 ms 2023-07-21 15:16:22,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 
2023-07-21 15:16:22,280 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 15:16:22,283 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,284 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 15:16:22,284 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1478): Online Regions={7697a92683cfac49519e4a4111355983=hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983., e66f96fe3a93ede34be690ff9e55183e=hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e., 1588230740=hbase:meta,,1.1588230740, 603dc738ccec189e3bde34ff84c46389=hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389.} 2023-07-21 15:16:22,284 DEBUG [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1504): Waiting on 1588230740, 603dc738ccec189e3bde34ff84c46389, 7697a92683cfac49519e4a4111355983, e66f96fe3a93ede34be690ff9e55183e 2023-07-21 15:16:22,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 15:16:22,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 15:16:22,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 15:16:22,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 15:16:22,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 15:16:22,287 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.43 KB heapSize=6.39 KB 2023-07-21 15:16:22,296 INFO [RS:4;jenkins-hbase17:41877] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 15:16:22,296 INFO [RS:4;jenkins-hbase17:41877] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,297 DEBUG [RS:4;jenkins-hbase17:41877] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x134125e3 to 127.0.0.1:62052 2023-07-21 15:16:22,297 DEBUG [RS:4;jenkins-hbase17:41877] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41877,1689952581983; all regions closed. 
2023-07-21 15:16:22,297 DEBUG [RS:4;jenkins-hbase17:41877] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:22,297 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:22,297 INFO [RS:4;jenkins-hbase17:41877] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:22,299 INFO [RS:4;jenkins-hbase17:41877] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41877 2023-07-21 15:16:22,301 DEBUG [RS:1;jenkins-hbase17:36003] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:22,301 INFO [RS:1;jenkins-hbase17:36003] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C36003%2C1689952569461:(num 1689952570120) 2023-07-21 15:16:22,301 DEBUG [RS:1;jenkins-hbase17:36003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,301 INFO [RS:1;jenkins-hbase17:36003] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,301 DEBUG [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:22,302 INFO [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C41609%2C1689952569336.meta:.meta(num 1689952570225) 2023-07-21 15:16:22,302 INFO [RS:1;jenkins-hbase17:36003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:22,311 INFO [RS:1;jenkins-hbase17:36003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:22,312 INFO [RS:1;jenkins-hbase17:36003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:22,312 INFO [RS:1;jenkins-hbase17:36003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:22,312 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 15:16:22,313 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/namespace/7697a92683cfac49519e4a4111355983/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=26 2023-07-21 15:16:22,317 INFO [RS:1;jenkins-hbase17:36003] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36003 2023-07-21 15:16:22,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:22,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7697a92683cfac49519e4a4111355983: 2023-07-21 15:16:22,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689952539232.7697a92683cfac49519e4a4111355983. 2023-07-21 15:16:22,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing e66f96fe3a93ede34be690ff9e55183e, disabling compactions & flushes 2023-07-21 15:16:22,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:22,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:22,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. after waiting 0 ms 2023-07-21 15:16:22,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:22,335 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,346 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/quota/e66f96fe3a93ede34be690ff9e55183e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 15:16:22,362 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 2023-07-21 15:16:22,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for e66f96fe3a93ede34be690ff9e55183e: 2023-07-21 15:16:22,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689952566088.e66f96fe3a93ede34be690ff9e55183e. 
2023-07-21 15:16:22,363 DEBUG [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:22,363 INFO [RS:0;jenkins-hbase17:41609] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C41609%2C1689952569336:(num 1689952570141) 2023-07-21 15:16:22,363 DEBUG [RS:0;jenkins-hbase17:41609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,363 INFO [RS:0;jenkins-hbase17:41609] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,371 INFO [RS:0;jenkins-hbase17:41609] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:22,371 INFO [RS:0;jenkins-hbase17:41609] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 15:16:22,371 INFO [RS:0;jenkins-hbase17:41609] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 15:16:22,371 INFO [RS:0;jenkins-hbase17:41609] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 15:16:22,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 603dc738ccec189e3bde34ff84c46389, disabling compactions & flushes 2023-07-21 15:16:22,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:22,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:22,372 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:22,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. after waiting 0 ms 2023-07-21 15:16:22,373 INFO [RS:0;jenkins-hbase17:41609] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41609 2023-07-21 15:16:22,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 
2023-07-21 15:16:22,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 603dc738ccec189e3bde34ff84c46389 1/1 column families, dataSize=2.10 KB heapSize=3.54 KB 2023-07-21 15:16:22,379 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.43 KB at sequenceid=188 (bloomFilter=false), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/2f5047c1ba06427cb65775dc246d533f 2023-07-21 15:16:22,381 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,381 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41609,1689952569336 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,382 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,383 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36003,1689952569461 2023-07-21 15:16:22,383 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41877,1689952581983 2023-07-21 15:16:22,383 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41609,1689952569336] 2023-07-21 15:16:22,383 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41609,1689952569336; numProcessing=1 2023-07-21 15:16:22,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/.tmp/info/2f5047c1ba06427cb65775dc246d533f as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/2f5047c1ba06427cb65775dc246d533f 2023-07-21 15:16:22,397 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/2f5047c1ba06427cb65775dc246d533f, entries=30, sequenceid=188, filesize=8.2 K 2023-07-21 15:16:22,398 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.43 KB/3516, heapSize ~5.88 KB/6016, currentSize=0 B/0 for 1588230740 in 111ms, sequenceid=188, compaction 
requested=false 2023-07-21 15:16:22,398 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 15:16:22,415 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64] to archive 2023-07-21 15:16:22,416 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-21 15:16:22,419 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/meta/1588230740/info/61fcafcc9c244e3eb1f1f966564d855c 2023-07-21 15:16:22,420 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/meta/1588230740/info/018c0ea790dd452bbb94d051c83f4c99 2023-07-21 15:16:22,422 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64 to hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/archive/data/hbase/meta/1588230740/info/332f83b68a1a40e2b33e4b32f5c50f64 2023-07-21 15:16:22,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.10 KB at sequenceid=108 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/91090b1affe24e38a56c46d1c5473964 2023-07-21 15:16:22,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/meta/1588230740/recovered.edits/191.seqid, newMaxSeqId=191, maxSeqId=175 2023-07-21 15:16:22,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91090b1affe24e38a56c46d1c5473964 2023-07-21 15:16:22,446 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:22,446 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:22,446 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 15:16:22,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 15:16:22,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/.tmp/m/91090b1affe24e38a56c46d1c5473964 as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/91090b1affe24e38a56c46d1c5473964 2023-07-21 15:16:22,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91090b1affe24e38a56c46d1c5473964 2023-07-21 15:16:22,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/m/91090b1affe24e38a56c46d1c5473964, entries=4, sequenceid=108, filesize=5.3 K 2023-07-21 15:16:22,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.10 KB/2150, heapSize ~3.52 KB/3608, currentSize=0 B/0 for 603dc738ccec189e3bde34ff84c46389 in 80ms, sequenceid=108, compaction requested=true 2023-07-21 15:16:22,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 15:16:22,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/data/hbase/rsgroup/603dc738ccec189e3bde34ff84c46389/recovered.edits/111.seqid, newMaxSeqId=111, maxSeqId=98 2023-07-21 15:16:22,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 15:16:22,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:22,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 603dc738ccec189e3bde34ff84c46389: 2023-07-21 15:16:22,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689952539161.603dc738ccec189e3bde34ff84c46389. 2023-07-21 15:16:22,484 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,44851,1689952578807; all regions closed. 
2023-07-21 15:16:22,491 DEBUG [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:22,491 INFO [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C44851%2C1689952578807.meta:.meta(num 1689952579904) 2023-07-21 15:16:22,499 DEBUG [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/oldWALs 2023-07-21 15:16:22,500 INFO [RS:3;jenkins-hbase17:44851] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C44851%2C1689952578807:(num 1689952579169) 2023-07-21 15:16:22,500 DEBUG [RS:3;jenkins-hbase17:44851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,500 INFO [RS:3;jenkins-hbase17:44851] regionserver.LeaseManager(133): Closed leases 2023-07-21 15:16:22,500 INFO [RS:3;jenkins-hbase17:44851] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 15:16:22,500 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:22,501 INFO [RS:3;jenkins-hbase17:44851] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:44851 2023-07-21 15:16:22,583 INFO [RS:0;jenkins-hbase17:41609] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41609,1689952569336; zookeeper connection closed. 2023-07-21 15:16:22,583 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,583 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41609-0x1018872b379001d, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,583 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@16f19a63] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@16f19a63 2023-07-21 15:16:22,584 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 15:16:22,584 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,44851,1689952578807 2023-07-21 15:16:22,584 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41609,1689952569336 already deleted, retry=false 2023-07-21 15:16:22,584 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41609,1689952569336 expired; onlineServers=3 2023-07-21 15:16:22,584 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41877,1689952581983] 2023-07-21 15:16:22,584 DEBUG 
[RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41877,1689952581983; numProcessing=2 2023-07-21 15:16:22,585 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41877,1689952581983 already deleted, retry=false 2023-07-21 15:16:22,585 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41877,1689952581983 expired; onlineServers=2 2023-07-21 15:16:22,585 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36003,1689952569461] 2023-07-21 15:16:22,585 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36003,1689952569461; numProcessing=3 2023-07-21 15:16:22,586 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36003,1689952569461 already deleted, retry=false 2023-07-21 15:16:22,586 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36003,1689952569461 expired; onlineServers=1 2023-07-21 15:16:22,586 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,44851,1689952578807] 2023-07-21 15:16:22,586 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,44851,1689952578807; numProcessing=4 2023-07-21 15:16:22,586 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,44851,1689952578807 already deleted, retry=false 2023-07-21 15:16:22,586 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,44851,1689952578807 expired; onlineServers=0 2023-07-21 15:16:22,586 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,43821,1689952569195' ***** 2023-07-21 15:16:22,587 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 15:16:22,587 DEBUG [M:0;jenkins-hbase17:43821] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b08cfa1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-21 15:16:22,587 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 15:16:22,589 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 15:16:22,589 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 15:16:22,589 INFO [M:0;jenkins-hbase17:43821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@395f40d9{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 15:16:22,589 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 15:16:22,590 INFO [M:0;jenkins-hbase17:43821] server.AbstractConnector(383): Stopped ServerConnector@58b12215{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,590 INFO [M:0;jenkins-hbase17:43821] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 15:16:22,591 INFO [M:0;jenkins-hbase17:43821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a45b6fe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 15:16:22,591 INFO [M:0;jenkins-hbase17:43821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c7d6ff4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/hadoop.log.dir/,STOPPED} 2023-07-21 15:16:22,592 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43821,1689952569195 2023-07-21 15:16:22,592 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43821,1689952569195; all regions closed. 2023-07-21 15:16:22,592 DEBUG [M:0;jenkins-hbase17:43821] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 15:16:22,592 INFO [M:0;jenkins-hbase17:43821] master.HMaster(1491): Stopping master jetty server 2023-07-21 15:16:22,593 INFO [M:0;jenkins-hbase17:43821] server.AbstractConnector(383): Stopped ServerConnector@7b2ccb1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 15:16:22,598 DEBUG [M:0;jenkins-hbase17:43821] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 15:16:22,599 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 15:16:22,599 DEBUG [M:0;jenkins-hbase17:43821] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 15:16:22,599 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952569896] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1689952569896,5,FailOnTimeoutGroup] 2023-07-21 15:16:22,599 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952569893] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1689952569893,5,FailOnTimeoutGroup] 2023-07-21 15:16:22,599 INFO [M:0;jenkins-hbase17:43821] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 15:16:22,599 INFO [M:0;jenkins-hbase17:43821] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-21 15:16:22,599 INFO [M:0;jenkins-hbase17:43821] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-07-21 15:16:22,599 DEBUG [M:0;jenkins-hbase17:43821] master.HMaster(1512): Stopping service threads 2023-07-21 15:16:22,599 INFO [M:0;jenkins-hbase17:43821] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 15:16:22,599 ERROR [M:0;jenkins-hbase17:43821] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 15:16:22,599 INFO [M:0;jenkins-hbase17:43821] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 15:16:22,600 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 15:16:22,600 DEBUG [M:0;jenkins-hbase17:43821] zookeeper.ZKUtil(398): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 15:16:22,601 WARN [M:0;jenkins-hbase17:43821] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 15:16:22,601 INFO [M:0;jenkins-hbase17:43821] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 15:16:22,601 INFO [M:0;jenkins-hbase17:43821] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 15:16:22,601 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 15:16:22,602 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:22,602 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:22,602 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 15:16:22,602 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 15:16:22,602 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=73.98 KB heapSize=90.94 KB 2023-07-21 15:16:22,617 INFO [M:0;jenkins-hbase17:43821] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.98 KB at sequenceid=1131 (bloomFilter=true), to=hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d1f8202090ae43c0b69c9a317744c51a 2023-07-21 15:16:22,624 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d1f8202090ae43c0b69c9a317744c51a as hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d1f8202090ae43c0b69c9a317744c51a 2023-07-21 15:16:22,631 INFO [M:0;jenkins-hbase17:43821] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37247/user/jenkins/test-data/567399bb-b412-8a40-e7f2-352096548ea3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d1f8202090ae43c0b69c9a317744c51a, entries=24, sequenceid=1131, filesize=8.3 K 2023-07-21 15:16:22,632 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegion(2948): Finished flush of dataSize ~73.98 KB/75759, heapSize ~90.92 KB/93104, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=1131, compaction requested=true 2023-07-21 15:16:22,633 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 15:16:22,633 DEBUG [M:0;jenkins-hbase17:43821] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 15:16:22,639 INFO [M:0;jenkins-hbase17:43821] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 15:16:22,639 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 15:16:22,640 INFO [M:0;jenkins-hbase17:43821] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43821 2023-07-21 15:16:22,641 DEBUG [M:0;jenkins-hbase17:43821] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,43821,1689952569195 already deleted, retry=false 2023-07-21 15:16:22,650 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,650 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:41877-0x1018872b379002a, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,650 INFO [RS:4;jenkins-hbase17:41877] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41877,1689952581983; zookeeper connection closed. 
2023-07-21 15:16:22,652 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d00c1d4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d00c1d4 2023-07-21 15:16:22,751 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,751 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:36003-0x1018872b379001e, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,751 INFO [RS:1;jenkins-hbase17:36003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36003,1689952569461; zookeeper connection closed. 2023-07-21 15:16:22,751 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@543332be] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@543332be 2023-07-21 15:16:22,951 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:22,951 INFO [M:0;jenkins-hbase17:43821] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43821,1689952569195; zookeeper connection closed. 2023-07-21 15:16:22,951 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): master:43821-0x1018872b379001c, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:23,051 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:23,051 INFO [RS:3;jenkins-hbase17:44851] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,44851,1689952578807; zookeeper connection closed. 
2023-07-21 15:16:23,051 DEBUG [Listener at localhost.localdomain/38883-EventThread] zookeeper.ZKWatcher(600): regionserver:44851-0x1018872b3790028, quorum=127.0.0.1:62052, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 15:16:23,052 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4433078b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4433078b 2023-07-21 15:16:23,052 INFO [Listener at localhost.localdomain/38883] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-21 15:16:23,052 WARN [Listener at localhost.localdomain/38883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:16:23,059 INFO [Listener at localhost.localdomain/38883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:16:23,157 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-21 15:16:23,166 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:16:23,166 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-642756276-136.243.18.41-1689952529515 (Datanode Uuid e1a1719c-0d63-4736-b78e-7293476d32dc) service to localhost.localdomain/127.0.0.1:37247 2023-07-21 15:16:23,168 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data5/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,168 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data6/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,171 WARN [Listener at localhost.localdomain/38883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:16:23,179 INFO [Listener at localhost.localdomain/38883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:16:23,287 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:16:23,287 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-642756276-136.243.18.41-1689952529515 (Datanode Uuid 46efc94c-862b-40f5-85ed-c5871b5b137b) service to localhost.localdomain/127.0.0.1:37247 2023-07-21 15:16:23,288 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data3/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,288 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data4/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,290 WARN [Listener at localhost.localdomain/38883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 15:16:23,292 INFO [Listener at localhost.localdomain/38883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 15:16:23,398 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 15:16:23,398 WARN [BP-642756276-136.243.18.41-1689952529515 heartbeating to localhost.localdomain/127.0.0.1:37247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-642756276-136.243.18.41-1689952529515 (Datanode Uuid d7cc208c-0cc0-44f3-b6d2-9546a365644e) service to localhost.localdomain/127.0.0.1:37247 2023-07-21 15:16:23,399 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data1/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,399 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6c4e6b0d-36fd-68ca-315e-31f7e4b039ba/cluster_899d2ac9-a566-db2c-b12a-5ad6dc1f605a/dfs/data/data2/current/BP-642756276-136.243.18.41-1689952529515] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 15:16:23,426 INFO [Listener at localhost.localdomain/38883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 15:16:23,556 INFO [Listener at localhost.localdomain/38883] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 15:16:23,618 INFO [Listener at localhost.localdomain/38883] hbase.HBaseTestingUtility(1293): Minicluster is down